00:00:00.000 Started by upstream project "autotest-per-patch" build number 132331 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.164 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.165 The recommended git tool is: git 00:00:00.165 using credential 00000000-0000-0000-0000-000000000002 00:00:00.167 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.187 Fetching changes from the remote Git repository 00:00:00.188 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.211 Using shallow fetch with depth 1 00:00:00.211 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.211 > git --version # timeout=10 00:00:00.234 > git --version # 'git version 2.39.2' 00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.302 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.313 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.324 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.324 > git config core.sparsecheckout # timeout=10 00:00:07.335 > git read-tree -mu HEAD # timeout=10 00:00:07.351 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.372 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.372 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.472 [Pipeline] Start of Pipeline 00:00:07.487 [Pipeline] library 00:00:07.488 Loading library shm_lib@master 00:00:07.489 Library shm_lib@master is cached. Copying from home. 00:00:07.507 [Pipeline] node 00:00:07.515 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.517 [Pipeline] { 00:00:07.529 [Pipeline] catchError 00:00:07.531 [Pipeline] { 00:00:07.543 [Pipeline] wrap 00:00:07.551 [Pipeline] { 00:00:07.559 [Pipeline] stage 00:00:07.560 [Pipeline] { (Prologue) 00:00:07.751 [Pipeline] sh 00:00:08.035 + logger -p user.info -t JENKINS-CI 00:00:08.053 [Pipeline] echo 00:00:08.054 Node: WFP8 00:00:08.062 [Pipeline] sh 00:00:08.366 [Pipeline] setCustomBuildProperty 00:00:08.377 [Pipeline] echo 00:00:08.379 Cleanup processes 00:00:08.383 [Pipeline] sh 00:00:08.667 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.667 3178508 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.682 [Pipeline] sh 00:00:08.970 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.971 ++ grep -v 'sudo pgrep' 00:00:08.971 ++ awk '{print $1}' 00:00:08.971 + sudo kill -9 00:00:08.971 + true 00:00:08.985 [Pipeline] cleanWs 00:00:08.995 [WS-CLEANUP] Deleting project workspace... 00:00:08.995 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.001 [WS-CLEANUP] done 00:00:09.004 [Pipeline] setCustomBuildProperty 00:00:09.014 [Pipeline] sh 00:00:09.294 + sudo git config --global --replace-all safe.directory '*' 00:00:09.377 [Pipeline] httpRequest 00:00:09.768 [Pipeline] echo 00:00:09.771 Sorcerer 10.211.164.20 is alive 00:00:09.781 [Pipeline] retry 00:00:09.783 [Pipeline] { 00:00:09.798 [Pipeline] httpRequest 00:00:09.803 HttpMethod: GET 00:00:09.803 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.804 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.807 Response Code: HTTP/1.1 200 OK 00:00:09.808 Success: Status code 200 is in the accepted range: 200,404 00:00:09.808 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.440 [Pipeline] } 00:00:11.456 [Pipeline] // retry 00:00:11.462 [Pipeline] sh 00:00:11.747 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.762 [Pipeline] httpRequest 00:00:12.359 [Pipeline] echo 00:00:12.361 Sorcerer 10.211.164.20 is alive 00:00:12.370 [Pipeline] retry 00:00:12.372 [Pipeline] { 00:00:12.387 [Pipeline] httpRequest 00:00:12.391 HttpMethod: GET 00:00:12.392 URL: http://10.211.164.20/packages/spdk_ea8382642faed78f3a1604546f47016c2685b2db.tar.gz 00:00:12.392 Sending request to url: http://10.211.164.20/packages/spdk_ea8382642faed78f3a1604546f47016c2685b2db.tar.gz 00:00:12.417 Response Code: HTTP/1.1 200 OK 00:00:12.417 Success: Status code 200 is in the accepted range: 200,404 00:00:12.418 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ea8382642faed78f3a1604546f47016c2685b2db.tar.gz 00:00:45.302 [Pipeline] } 00:00:45.319 [Pipeline] // retry 00:00:45.327 [Pipeline] sh 00:00:45.618 + tar --no-same-owner -xf spdk_ea8382642faed78f3a1604546f47016c2685b2db.tar.gz 00:00:48.165 [Pipeline] sh 00:00:48.453 + git -C spdk log 
--oneline -n5 00:00:48.453 ea8382642 scripts/perf: Add env knob to disable power monitors 00:00:48.453 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:48.453 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 00:00:48.453 029355612 bdev_ut: add manual examine bdev unit test case 00:00:48.453 fc96810c2 bdev: remove bdev from examine allow list on unregister 00:00:48.465 [Pipeline] } 00:00:48.480 [Pipeline] // stage 00:00:48.489 [Pipeline] stage 00:00:48.491 [Pipeline] { (Prepare) 00:00:48.507 [Pipeline] writeFile 00:00:48.523 [Pipeline] sh 00:00:48.808 + logger -p user.info -t JENKINS-CI 00:00:48.821 [Pipeline] sh 00:00:49.107 + logger -p user.info -t JENKINS-CI 00:00:49.120 [Pipeline] sh 00:00:49.406 + cat autorun-spdk.conf 00:00:49.406 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.406 SPDK_TEST_NVMF=1 00:00:49.406 SPDK_TEST_NVME_CLI=1 00:00:49.406 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.406 SPDK_TEST_NVMF_NICS=e810 00:00:49.406 SPDK_TEST_VFIOUSER=1 00:00:49.406 SPDK_RUN_UBSAN=1 00:00:49.406 NET_TYPE=phy 00:00:49.414 RUN_NIGHTLY=0 00:00:49.418 [Pipeline] readFile 00:00:49.442 [Pipeline] withEnv 00:00:49.444 [Pipeline] { 00:00:49.456 [Pipeline] sh 00:00:49.743 + set -ex 00:00:49.743 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:49.743 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:49.743 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.743 ++ SPDK_TEST_NVMF=1 00:00:49.743 ++ SPDK_TEST_NVME_CLI=1 00:00:49.743 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.743 ++ SPDK_TEST_NVMF_NICS=e810 00:00:49.743 ++ SPDK_TEST_VFIOUSER=1 00:00:49.743 ++ SPDK_RUN_UBSAN=1 00:00:49.743 ++ NET_TYPE=phy 00:00:49.743 ++ RUN_NIGHTLY=0 00:00:49.743 + case $SPDK_TEST_NVMF_NICS in 00:00:49.743 + DRIVERS=ice 00:00:49.743 + [[ tcp == \r\d\m\a ]] 00:00:49.743 + [[ -n ice ]] 00:00:49.743 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:49.743 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:00:49.743 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:49.743 rmmod: ERROR: Module irdma is not currently loaded 00:00:49.743 rmmod: ERROR: Module i40iw is not currently loaded 00:00:49.743 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:49.743 + true 00:00:49.744 + for D in $DRIVERS 00:00:49.744 + sudo modprobe ice 00:00:49.744 + exit 0 00:00:49.754 [Pipeline] } 00:00:49.768 [Pipeline] // withEnv 00:00:49.773 [Pipeline] } 00:00:49.786 [Pipeline] // stage 00:00:49.796 [Pipeline] catchError 00:00:49.798 [Pipeline] { 00:00:49.811 [Pipeline] timeout 00:00:49.811 Timeout set to expire in 1 hr 0 min 00:00:49.813 [Pipeline] { 00:00:49.826 [Pipeline] stage 00:00:49.828 [Pipeline] { (Tests) 00:00:49.842 [Pipeline] sh 00:00:50.131 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.131 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.131 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.131 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:50.131 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.131 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:50.131 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:50.131 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:50.131 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:50.131 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:50.131 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:50.131 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.131 + source /etc/os-release 00:00:50.131 ++ NAME='Fedora Linux' 00:00:50.131 ++ VERSION='39 (Cloud Edition)' 00:00:50.131 ++ ID=fedora 00:00:50.131 ++ VERSION_ID=39 00:00:50.131 ++ VERSION_CODENAME= 00:00:50.131 ++ PLATFORM_ID=platform:f39 00:00:50.131 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:50.131 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:50.131 ++ LOGO=fedora-logo-icon 00:00:50.131 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:50.131 ++ HOME_URL=https://fedoraproject.org/ 00:00:50.131 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:50.131 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:50.131 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:50.131 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:50.131 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:50.131 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:50.131 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:50.131 ++ SUPPORT_END=2024-11-12 00:00:50.131 ++ VARIANT='Cloud Edition' 00:00:50.131 ++ VARIANT_ID=cloud 00:00:50.131 + uname -a 00:00:50.131 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:50.131 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:52.671 Hugepages 00:00:52.671 node hugesize free / total 00:00:52.671 node0 1048576kB 0 / 0 00:00:52.671 node0 2048kB 0 / 0 00:00:52.671 node1 1048576kB 0 / 0 00:00:52.671 node1 2048kB 0 / 0 00:00:52.671 00:00:52.671 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:52.671 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:52.671 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:00:52.671 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:52.671 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:52.671 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:52.671 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:52.671 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:52.671 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:52.671 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:52.671 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:52.671 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:52.671 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:52.671 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:52.671 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:52.671 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:52.671 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:52.671 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:52.671 + rm -f /tmp/spdk-ld-path 00:00:52.671 + source autorun-spdk.conf 00:00:52.671 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.671 ++ SPDK_TEST_NVMF=1 00:00:52.671 ++ SPDK_TEST_NVME_CLI=1 00:00:52.671 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.671 ++ SPDK_TEST_NVMF_NICS=e810 00:00:52.671 ++ SPDK_TEST_VFIOUSER=1 00:00:52.671 ++ SPDK_RUN_UBSAN=1 00:00:52.671 ++ NET_TYPE=phy 00:00:52.671 ++ RUN_NIGHTLY=0 00:00:52.671 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:52.671 + [[ -n '' ]] 00:00:52.671 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.671 + for M in /var/spdk/build-*-manifest.txt 00:00:52.671 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:52.671 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:52.671 + for M in /var/spdk/build-*-manifest.txt 00:00:52.671 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:52.671 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:52.671 + for M in /var/spdk/build-*-manifest.txt 00:00:52.671 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:00:52.671 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:52.671 ++ uname 00:00:52.671 + [[ Linux == \L\i\n\u\x ]] 00:00:52.671 + sudo dmesg -T 00:00:52.931 + sudo dmesg --clear 00:00:52.931 + dmesg_pid=3179430 00:00:52.931 + [[ Fedora Linux == FreeBSD ]] 00:00:52.931 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:52.931 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:52.931 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:52.931 + [[ -x /usr/src/fio-static/fio ]] 00:00:52.931 + export FIO_BIN=/usr/src/fio-static/fio 00:00:52.931 + FIO_BIN=/usr/src/fio-static/fio 00:00:52.931 + sudo dmesg -Tw 00:00:52.931 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:52.931 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:52.931 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:52.931 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:52.931 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:52.931 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:52.931 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:52.931 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:52.931 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.931 17:18:55 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:52.931 17:18:55 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:52.931 17:18:55 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:52.931 17:18:55 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:52.931 17:18:55 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.931 17:18:55 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:52.931 17:18:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:52.931 17:18:55 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:52.932 17:18:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:52.932 17:18:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:52.932 17:18:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:52.932 17:18:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.932 17:18:55 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.932 17:18:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.932 17:18:55 -- paths/export.sh@5 -- $ export PATH 00:00:52.932 17:18:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.932 17:18:55 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:52.932 17:18:55 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:52.932 17:18:55 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732033135.XXXXXX 00:00:52.932 17:18:55 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732033135.MMbfnm 00:00:52.932 17:18:55 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:52.932 17:18:55 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:52.932 17:18:55 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:52.932 17:18:55 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:52.932 17:18:55 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:52.932 17:18:55 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:52.932 17:18:55 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:52.932 17:18:55 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.932 17:18:55 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:52.932 17:18:55 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:52.932 17:18:55 -- pm/common@17 -- $ local monitor 00:00:52.932 17:18:55 -- pm/common@19 -- $ [[ -z '' ]] 00:00:52.932 17:18:55 -- pm/common@21 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.932 17:18:55 -- pm/common@21 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.932 17:18:55 -- pm/common@21 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.932 17:18:55 -- pm/common@23 -- $ date +%s 00:00:52.932 17:18:55 -- pm/common@21 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.932 17:18:55 -- pm/common@23 -- $ date +%s 00:00:52.932 17:18:55 -- pm/common@27 -- $ sleep 1 00:00:52.932 17:18:55 -- pm/common@23 -- $ date +%s 00:00:52.932 17:18:55 -- pm/common@23 -- $ date +%s 00:00:52.932 17:18:55 -- pm/common@23 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732033135 00:00:52.932 17:18:55 -- pm/common@23 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732033135 00:00:52.932 17:18:55 -- pm/common@23 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732033135 00:00:52.932 17:18:55 -- pm/common@23 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732033135 00:00:53.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732033135_collect-vmstat.pm.log 00:00:53.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732033135_collect-cpu-load.pm.log 00:00:53.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732033135_collect-cpu-temp.pm.log 00:00:53.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732033135_collect-bmc-pm.bmc.pm.log 00:00:54.129 17:18:56 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:54.129 17:18:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:54.129 17:18:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:54.129 17:18:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:54.129 17:18:56 -- spdk/autobuild.sh@16 -- $ date -u 00:00:54.129 Tue Nov 19 04:18:56 PM UTC 2024 00:00:54.129 17:18:56 -- spdk/autobuild.sh@17 -- $ git describe 
--tags 00:00:54.129 v25.01-pre-198-gea8382642 00:00:54.129 17:18:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:54.130 17:18:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:54.130 17:18:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:54.130 17:18:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:54.130 17:18:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:54.130 17:18:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:54.130 ************************************ 00:00:54.130 START TEST ubsan 00:00:54.130 ************************************ 00:00:54.130 17:18:56 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:54.130 using ubsan 00:00:54.130 00:00:54.130 real 0m0.000s 00:00:54.130 user 0m0.000s 00:00:54.130 sys 0m0.000s 00:00:54.130 17:18:56 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:54.130 17:18:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:54.130 ************************************ 00:00:54.130 END TEST ubsan 00:00:54.130 ************************************ 00:00:54.130 17:18:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:54.130 17:18:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:54.130 17:18:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:54.130 17:18:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:54.130 17:18:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:54.130 17:18:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:54.130 17:18:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:54.130 17:18:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:54.130 17:18:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:54.389 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:54.389 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:54.649 Using 'verbs' RDMA provider 00:01:07.448 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:19.765 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:19.765 Creating mk/config.mk...done. 00:01:19.765 Creating mk/cc.flags.mk...done. 00:01:19.765 Type 'make' to build. 00:01:19.765 17:19:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:19.765 17:19:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:19.765 17:19:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:19.765 17:19:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.765 ************************************ 00:01:19.765 START TEST make 00:01:19.765 ************************************ 00:01:19.765 17:19:21 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:20.024 make[1]: Nothing to be done for 'all'. 
00:01:21.416 The Meson build system 00:01:21.416 Version: 1.5.0 00:01:21.416 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:21.416 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:21.416 Build type: native build 00:01:21.416 Project name: libvfio-user 00:01:21.416 Project version: 0.0.1 00:01:21.416 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:21.416 C linker for the host machine: cc ld.bfd 2.40-14 00:01:21.416 Host machine cpu family: x86_64 00:01:21.416 Host machine cpu: x86_64 00:01:21.416 Run-time dependency threads found: YES 00:01:21.416 Library dl found: YES 00:01:21.416 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:21.416 Run-time dependency json-c found: YES 0.17 00:01:21.416 Run-time dependency cmocka found: YES 1.1.7 00:01:21.416 Program pytest-3 found: NO 00:01:21.416 Program flake8 found: NO 00:01:21.416 Program misspell-fixer found: NO 00:01:21.416 Program restructuredtext-lint found: NO 00:01:21.416 Program valgrind found: YES (/usr/bin/valgrind) 00:01:21.416 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:21.416 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:21.416 Compiler for C supports arguments -Wwrite-strings: YES 00:01:21.416 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:21.416 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:21.416 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:21.416 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:21.416 Build targets in project: 8 00:01:21.416 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:21.416 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:21.416 00:01:21.416 libvfio-user 0.0.1 00:01:21.416 00:01:21.416 User defined options 00:01:21.416 buildtype : debug 00:01:21.416 default_library: shared 00:01:21.416 libdir : /usr/local/lib 00:01:21.416 00:01:21.416 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:21.984 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:21.984 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:21.984 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:21.984 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:21.984 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:21.984 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:21.984 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:21.984 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:21.984 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:21.984 [9/37] Compiling C object samples/null.p/null.c.o 00:01:21.984 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:21.984 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:21.984 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:21.984 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:21.984 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:21.984 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:21.984 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:22.243 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:22.244 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_sock.c.o 00:01:22.244 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:22.244 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:22.244 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:22.244 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:22.244 [23/37] Compiling C object samples/client.p/client.c.o 00:01:22.244 [24/37] Compiling C object samples/server.p/server.c.o 00:01:22.244 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:22.244 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:22.244 [27/37] Linking target samples/client 00:01:22.244 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:22.244 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:22.244 [30/37] Linking target test/unit_tests 00:01:22.244 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:22.504 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:22.504 [33/37] Linking target samples/gpio-pci-idio-16 00:01:22.504 [34/37] Linking target samples/server 00:01:22.504 [35/37] Linking target samples/null 00:01:22.504 [36/37] Linking target samples/lspci 00:01:22.504 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:22.504 INFO: autodetecting backend as ninja 00:01:22.504 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:22.504 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:22.763 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:22.763 ninja: no work to do. 
00:01:28.045 The Meson build system 00:01:28.045 Version: 1.5.0 00:01:28.045 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:28.045 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:28.045 Build type: native build 00:01:28.045 Program cat found: YES (/usr/bin/cat) 00:01:28.045 Project name: DPDK 00:01:28.045 Project version: 24.03.0 00:01:28.045 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:28.045 C linker for the host machine: cc ld.bfd 2.40-14 00:01:28.045 Host machine cpu family: x86_64 00:01:28.045 Host machine cpu: x86_64 00:01:28.045 Message: ## Building in Developer Mode ## 00:01:28.045 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:28.045 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:28.045 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:28.045 Program python3 found: YES (/usr/bin/python3) 00:01:28.045 Program cat found: YES (/usr/bin/cat) 00:01:28.045 Compiler for C supports arguments -march=native: YES 00:01:28.045 Checking for size of "void *" : 8 00:01:28.045 Checking for size of "void *" : 8 (cached) 00:01:28.045 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:28.045 Library m found: YES 00:01:28.045 Library numa found: YES 00:01:28.045 Has header "numaif.h" : YES 00:01:28.045 Library fdt found: NO 00:01:28.045 Library execinfo found: NO 00:01:28.045 Has header "execinfo.h" : YES 00:01:28.045 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:28.045 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.045 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.045 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.045 Run-time dependency openssl found: YES 3.1.1 00:01:28.045 Run-time 
dependency libpcap found: YES 1.10.4 00:01:28.045 Has header "pcap.h" with dependency libpcap: YES 00:01:28.045 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.045 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.045 Compiler for C supports arguments -Wformat: YES 00:01:28.045 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.045 Compiler for C supports arguments -Wformat-security: NO 00:01:28.045 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.046 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.046 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.046 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.046 Compiler for C supports arguments -Wpointer-arith: YES 00:01:28.046 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.046 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.046 Compiler for C supports arguments -Wundef: YES 00:01:28.046 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.046 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.046 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:28.046 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.046 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.046 Program objdump found: YES (/usr/bin/objdump) 00:01:28.046 Compiler for C supports arguments -mavx512f: YES 00:01:28.046 Checking if "AVX512 checking" compiles: YES 00:01:28.046 Fetching value of define "__SSE4_2__" : 1 00:01:28.046 Fetching value of define "__AES__" : 1 00:01:28.046 Fetching value of define "__AVX__" : 1 00:01:28.046 Fetching value of define "__AVX2__" : 1 00:01:28.046 Fetching value of define "__AVX512BW__" : 1 00:01:28.046 Fetching value of define "__AVX512CD__" : 1 00:01:28.046 Fetching value of define "__AVX512DQ__" : 1 00:01:28.046 Fetching value of define "__AVX512F__" : 1 
00:01:28.046 Fetching value of define "__AVX512VL__" : 1 00:01:28.046 Fetching value of define "__PCLMUL__" : 1 00:01:28.046 Fetching value of define "__RDRND__" : 1 00:01:28.046 Fetching value of define "__RDSEED__" : 1 00:01:28.046 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:28.046 Fetching value of define "__znver1__" : (undefined) 00:01:28.046 Fetching value of define "__znver2__" : (undefined) 00:01:28.046 Fetching value of define "__znver3__" : (undefined) 00:01:28.046 Fetching value of define "__znver4__" : (undefined) 00:01:28.046 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.046 Message: lib/log: Defining dependency "log" 00:01:28.046 Message: lib/kvargs: Defining dependency "kvargs" 00:01:28.046 Message: lib/telemetry: Defining dependency "telemetry" 00:01:28.046 Checking for function "getentropy" : NO 00:01:28.046 Message: lib/eal: Defining dependency "eal" 00:01:28.046 Message: lib/ring: Defining dependency "ring" 00:01:28.046 Message: lib/rcu: Defining dependency "rcu" 00:01:28.046 Message: lib/mempool: Defining dependency "mempool" 00:01:28.046 Message: lib/mbuf: Defining dependency "mbuf" 00:01:28.046 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.046 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.046 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.046 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.046 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:28.046 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:28.046 Compiler for C supports arguments -mpclmul: YES 00:01:28.046 Compiler for C supports arguments -maes: YES 00:01:28.046 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.046 Compiler for C supports arguments -mavx512bw: YES 00:01:28.046 Compiler for C supports arguments -mavx512dq: YES 00:01:28.046 Compiler for C supports arguments -mavx512vl: YES 00:01:28.046 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:28.046 Compiler for C supports arguments -mavx2: YES 00:01:28.046 Compiler for C supports arguments -mavx: YES 00:01:28.046 Message: lib/net: Defining dependency "net" 00:01:28.046 Message: lib/meter: Defining dependency "meter" 00:01:28.046 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.046 Message: lib/pci: Defining dependency "pci" 00:01:28.046 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.046 Message: lib/hash: Defining dependency "hash" 00:01:28.046 Message: lib/timer: Defining dependency "timer" 00:01:28.046 Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.046 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.046 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.046 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.046 Message: lib/power: Defining dependency "power" 00:01:28.046 Message: lib/reorder: Defining dependency "reorder" 00:01:28.046 Message: lib/security: Defining dependency "security" 00:01:28.046 Has header "linux/userfaultfd.h" : YES 00:01:28.046 Has header "linux/vduse.h" : YES 00:01:28.046 Message: lib/vhost: Defining dependency "vhost" 00:01:28.046 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:28.046 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:28.046 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:28.046 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:28.046 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:28.046 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:28.046 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:28.046 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:28.046 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:28.046 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:28.046 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:28.046 Configuring doxy-api-html.conf using configuration 00:01:28.046 Configuring doxy-api-man.conf using configuration 00:01:28.046 Program mandb found: YES (/usr/bin/mandb) 00:01:28.046 Program sphinx-build found: NO 00:01:28.046 Configuring rte_build_config.h using configuration 00:01:28.046 Message: 00:01:28.046 ================= 00:01:28.046 Applications Enabled 00:01:28.046 ================= 00:01:28.046 00:01:28.046 apps: 00:01:28.046 00:01:28.046 00:01:28.046 Message: 00:01:28.046 ================= 00:01:28.046 Libraries Enabled 00:01:28.046 ================= 00:01:28.046 00:01:28.046 libs: 00:01:28.046 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:28.046 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:28.046 cryptodev, dmadev, power, reorder, security, vhost, 00:01:28.046 00:01:28.046 Message: 00:01:28.046 =============== 00:01:28.046 Drivers Enabled 00:01:28.046 =============== 00:01:28.046 00:01:28.046 common: 00:01:28.046 00:01:28.046 bus: 00:01:28.046 pci, vdev, 00:01:28.046 mempool: 00:01:28.046 ring, 00:01:28.046 dma: 00:01:28.046 00:01:28.046 net: 00:01:28.046 00:01:28.046 crypto: 00:01:28.046 00:01:28.046 compress: 00:01:28.046 00:01:28.046 vdpa: 00:01:28.046 00:01:28.046 00:01:28.046 Message: 00:01:28.046 ================= 00:01:28.046 Content Skipped 00:01:28.046 ================= 00:01:28.046 00:01:28.046 apps: 00:01:28.046 dumpcap: explicitly disabled via build config 00:01:28.046 graph: explicitly disabled via build config 00:01:28.046 pdump: explicitly disabled via build config 00:01:28.046 proc-info: explicitly disabled via build config 00:01:28.046 test-acl: explicitly disabled via build config 00:01:28.046 test-bbdev: explicitly disabled via build config 00:01:28.046 test-cmdline: explicitly disabled via build config 00:01:28.046 test-compress-perf: explicitly disabled via build config 00:01:28.046 test-crypto-perf: explicitly disabled 
via build config 00:01:28.046 test-dma-perf: explicitly disabled via build config 00:01:28.046 test-eventdev: explicitly disabled via build config 00:01:28.046 test-fib: explicitly disabled via build config 00:01:28.046 test-flow-perf: explicitly disabled via build config 00:01:28.046 test-gpudev: explicitly disabled via build config 00:01:28.046 test-mldev: explicitly disabled via build config 00:01:28.046 test-pipeline: explicitly disabled via build config 00:01:28.046 test-pmd: explicitly disabled via build config 00:01:28.046 test-regex: explicitly disabled via build config 00:01:28.046 test-sad: explicitly disabled via build config 00:01:28.046 test-security-perf: explicitly disabled via build config 00:01:28.046 00:01:28.046 libs: 00:01:28.046 argparse: explicitly disabled via build config 00:01:28.046 metrics: explicitly disabled via build config 00:01:28.046 acl: explicitly disabled via build config 00:01:28.046 bbdev: explicitly disabled via build config 00:01:28.046 bitratestats: explicitly disabled via build config 00:01:28.046 bpf: explicitly disabled via build config 00:01:28.046 cfgfile: explicitly disabled via build config 00:01:28.046 distributor: explicitly disabled via build config 00:01:28.046 efd: explicitly disabled via build config 00:01:28.046 eventdev: explicitly disabled via build config 00:01:28.046 dispatcher: explicitly disabled via build config 00:01:28.046 gpudev: explicitly disabled via build config 00:01:28.046 gro: explicitly disabled via build config 00:01:28.046 gso: explicitly disabled via build config 00:01:28.046 ip_frag: explicitly disabled via build config 00:01:28.046 jobstats: explicitly disabled via build config 00:01:28.046 latencystats: explicitly disabled via build config 00:01:28.046 lpm: explicitly disabled via build config 00:01:28.046 member: explicitly disabled via build config 00:01:28.046 pcapng: explicitly disabled via build config 00:01:28.046 rawdev: explicitly disabled via build config 00:01:28.046 regexdev: 
explicitly disabled via build config 00:01:28.046 mldev: explicitly disabled via build config 00:01:28.046 rib: explicitly disabled via build config 00:01:28.046 sched: explicitly disabled via build config 00:01:28.046 stack: explicitly disabled via build config 00:01:28.046 ipsec: explicitly disabled via build config 00:01:28.046 pdcp: explicitly disabled via build config 00:01:28.046 fib: explicitly disabled via build config 00:01:28.046 port: explicitly disabled via build config 00:01:28.046 pdump: explicitly disabled via build config 00:01:28.046 table: explicitly disabled via build config 00:01:28.046 pipeline: explicitly disabled via build config 00:01:28.046 graph: explicitly disabled via build config 00:01:28.046 node: explicitly disabled via build config 00:01:28.046 00:01:28.046 drivers: 00:01:28.046 common/cpt: not in enabled drivers build config 00:01:28.046 common/dpaax: not in enabled drivers build config 00:01:28.046 common/iavf: not in enabled drivers build config 00:01:28.047 common/idpf: not in enabled drivers build config 00:01:28.047 common/ionic: not in enabled drivers build config 00:01:28.047 common/mvep: not in enabled drivers build config 00:01:28.047 common/octeontx: not in enabled drivers build config 00:01:28.047 bus/auxiliary: not in enabled drivers build config 00:01:28.047 bus/cdx: not in enabled drivers build config 00:01:28.047 bus/dpaa: not in enabled drivers build config 00:01:28.047 bus/fslmc: not in enabled drivers build config 00:01:28.047 bus/ifpga: not in enabled drivers build config 00:01:28.047 bus/platform: not in enabled drivers build config 00:01:28.047 bus/uacce: not in enabled drivers build config 00:01:28.047 bus/vmbus: not in enabled drivers build config 00:01:28.047 common/cnxk: not in enabled drivers build config 00:01:28.047 common/mlx5: not in enabled drivers build config 00:01:28.047 common/nfp: not in enabled drivers build config 00:01:28.047 common/nitrox: not in enabled drivers build config 00:01:28.047 
common/qat: not in enabled drivers build config 00:01:28.047 common/sfc_efx: not in enabled drivers build config 00:01:28.047 mempool/bucket: not in enabled drivers build config 00:01:28.047 mempool/cnxk: not in enabled drivers build config 00:01:28.047 mempool/dpaa: not in enabled drivers build config 00:01:28.047 mempool/dpaa2: not in enabled drivers build config 00:01:28.047 mempool/octeontx: not in enabled drivers build config 00:01:28.047 mempool/stack: not in enabled drivers build config 00:01:28.047 dma/cnxk: not in enabled drivers build config 00:01:28.047 dma/dpaa: not in enabled drivers build config 00:01:28.047 dma/dpaa2: not in enabled drivers build config 00:01:28.047 dma/hisilicon: not in enabled drivers build config 00:01:28.047 dma/idxd: not in enabled drivers build config 00:01:28.047 dma/ioat: not in enabled drivers build config 00:01:28.047 dma/skeleton: not in enabled drivers build config 00:01:28.047 net/af_packet: not in enabled drivers build config 00:01:28.047 net/af_xdp: not in enabled drivers build config 00:01:28.047 net/ark: not in enabled drivers build config 00:01:28.047 net/atlantic: not in enabled drivers build config 00:01:28.047 net/avp: not in enabled drivers build config 00:01:28.047 net/axgbe: not in enabled drivers build config 00:01:28.047 net/bnx2x: not in enabled drivers build config 00:01:28.047 net/bnxt: not in enabled drivers build config 00:01:28.047 net/bonding: not in enabled drivers build config 00:01:28.047 net/cnxk: not in enabled drivers build config 00:01:28.047 net/cpfl: not in enabled drivers build config 00:01:28.047 net/cxgbe: not in enabled drivers build config 00:01:28.047 net/dpaa: not in enabled drivers build config 00:01:28.047 net/dpaa2: not in enabled drivers build config 00:01:28.047 net/e1000: not in enabled drivers build config 00:01:28.047 net/ena: not in enabled drivers build config 00:01:28.047 net/enetc: not in enabled drivers build config 00:01:28.047 net/enetfec: not in enabled drivers build 
config 00:01:28.047 net/enic: not in enabled drivers build config 00:01:28.047 net/failsafe: not in enabled drivers build config 00:01:28.047 net/fm10k: not in enabled drivers build config 00:01:28.047 net/gve: not in enabled drivers build config 00:01:28.047 net/hinic: not in enabled drivers build config 00:01:28.047 net/hns3: not in enabled drivers build config 00:01:28.047 net/i40e: not in enabled drivers build config 00:01:28.047 net/iavf: not in enabled drivers build config 00:01:28.047 net/ice: not in enabled drivers build config 00:01:28.047 net/idpf: not in enabled drivers build config 00:01:28.047 net/igc: not in enabled drivers build config 00:01:28.047 net/ionic: not in enabled drivers build config 00:01:28.047 net/ipn3ke: not in enabled drivers build config 00:01:28.047 net/ixgbe: not in enabled drivers build config 00:01:28.047 net/mana: not in enabled drivers build config 00:01:28.047 net/memif: not in enabled drivers build config 00:01:28.047 net/mlx4: not in enabled drivers build config 00:01:28.047 net/mlx5: not in enabled drivers build config 00:01:28.047 net/mvneta: not in enabled drivers build config 00:01:28.047 net/mvpp2: not in enabled drivers build config 00:01:28.047 net/netvsc: not in enabled drivers build config 00:01:28.047 net/nfb: not in enabled drivers build config 00:01:28.047 net/nfp: not in enabled drivers build config 00:01:28.047 net/ngbe: not in enabled drivers build config 00:01:28.047 net/null: not in enabled drivers build config 00:01:28.047 net/octeontx: not in enabled drivers build config 00:01:28.047 net/octeon_ep: not in enabled drivers build config 00:01:28.047 net/pcap: not in enabled drivers build config 00:01:28.047 net/pfe: not in enabled drivers build config 00:01:28.047 net/qede: not in enabled drivers build config 00:01:28.047 net/ring: not in enabled drivers build config 00:01:28.047 net/sfc: not in enabled drivers build config 00:01:28.047 net/softnic: not in enabled drivers build config 00:01:28.047 net/tap: 
not in enabled drivers build config 00:01:28.047 net/thunderx: not in enabled drivers build config 00:01:28.047 net/txgbe: not in enabled drivers build config 00:01:28.047 net/vdev_netvsc: not in enabled drivers build config 00:01:28.047 net/vhost: not in enabled drivers build config 00:01:28.047 net/virtio: not in enabled drivers build config 00:01:28.047 net/vmxnet3: not in enabled drivers build config 00:01:28.047 raw/*: missing internal dependency, "rawdev" 00:01:28.047 crypto/armv8: not in enabled drivers build config 00:01:28.047 crypto/bcmfs: not in enabled drivers build config 00:01:28.047 crypto/caam_jr: not in enabled drivers build config 00:01:28.047 crypto/ccp: not in enabled drivers build config 00:01:28.047 crypto/cnxk: not in enabled drivers build config 00:01:28.047 crypto/dpaa_sec: not in enabled drivers build config 00:01:28.047 crypto/dpaa2_sec: not in enabled drivers build config 00:01:28.047 crypto/ipsec_mb: not in enabled drivers build config 00:01:28.047 crypto/mlx5: not in enabled drivers build config 00:01:28.047 crypto/mvsam: not in enabled drivers build config 00:01:28.047 crypto/nitrox: not in enabled drivers build config 00:01:28.047 crypto/null: not in enabled drivers build config 00:01:28.047 crypto/octeontx: not in enabled drivers build config 00:01:28.047 crypto/openssl: not in enabled drivers build config 00:01:28.047 crypto/scheduler: not in enabled drivers build config 00:01:28.047 crypto/uadk: not in enabled drivers build config 00:01:28.047 crypto/virtio: not in enabled drivers build config 00:01:28.047 compress/isal: not in enabled drivers build config 00:01:28.047 compress/mlx5: not in enabled drivers build config 00:01:28.047 compress/nitrox: not in enabled drivers build config 00:01:28.047 compress/octeontx: not in enabled drivers build config 00:01:28.047 compress/zlib: not in enabled drivers build config 00:01:28.047 regex/*: missing internal dependency, "regexdev" 00:01:28.047 ml/*: missing internal dependency, "mldev" 
00:01:28.047 vdpa/ifc: not in enabled drivers build config 00:01:28.047 vdpa/mlx5: not in enabled drivers build config 00:01:28.047 vdpa/nfp: not in enabled drivers build config 00:01:28.047 vdpa/sfc: not in enabled drivers build config 00:01:28.047 event/*: missing internal dependency, "eventdev" 00:01:28.047 baseband/*: missing internal dependency, "bbdev" 00:01:28.047 gpu/*: missing internal dependency, "gpudev" 00:01:28.047 00:01:28.047 00:01:28.047 Build targets in project: 85 00:01:28.047 00:01:28.047 DPDK 24.03.0 00:01:28.047 00:01:28.047 User defined options 00:01:28.047 buildtype : debug 00:01:28.047 default_library : shared 00:01:28.047 libdir : lib 00:01:28.047 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:28.047 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:28.047 c_link_args : 00:01:28.047 cpu_instruction_set: native 00:01:28.047 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:28.047 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:28.047 enable_docs : false 00:01:28.047 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:28.047 enable_kmods : false 00:01:28.047 max_lcores : 128 00:01:28.047 tests : false 00:01:28.047 00:01:28.047 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.625 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:28.625 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:28.625 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:28.625 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:28.625 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:28.625 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:28.625 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:28.625 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:28.887 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:28.887 [9/268] Linking static target lib/librte_kvargs.a 00:01:28.887 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:28.887 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:28.887 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:28.887 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:28.887 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:28.887 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:28.887 [16/268] Linking static target lib/librte_log.a 00:01:28.887 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:28.887 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:28.887 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:28.887 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:28.887 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:28.887 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:28.887 [23/268] Linking static target lib/librte_pci.a 00:01:29.147 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:29.147 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:29.147 [26/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:29.147 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:29.147 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:29.147 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:29.147 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:29.147 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:29.147 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:29.147 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:29.147 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:29.147 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:29.147 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:29.147 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:29.147 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:29.147 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:29.147 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:29.147 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:29.147 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:29.147 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:29.147 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:29.147 [45/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:29.147 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:29.147 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:29.147 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:29.147 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:29.147 [50/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:29.147 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:29.147 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:29.147 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:29.147 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:29.147 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:29.147 [56/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:29.147 [57/268] Linking static target lib/librte_meter.a 00:01:29.147 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:29.147 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:29.147 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:29.410 [61/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:29.410 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:29.410 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:29.410 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:29.410 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:29.410 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:29.410 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:29.410 [68/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:29.410 [69/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:29.410 [70/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:29.410 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:29.410 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:29.410 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:29.410 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:29.410 [75/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:29.410 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:29.410 [77/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:29.410 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:29.410 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:29.410 [80/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:29.410 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:29.410 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:29.410 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:29.410 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:29.410 [85/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:29.410 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:29.410 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:29.410 [88/268] Linking static target lib/librte_telemetry.a 00:01:29.410 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:29.410 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:29.410 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:29.410 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:29.410 [93/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:29.410 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:29.410 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:29.410 [96/268] Linking static target lib/librte_ring.a 00:01:29.410 [97/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.410 [98/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.410 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:29.410 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:29.410 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:29.410 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:29.410 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:29.410 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:29.410 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:29.410 [106/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:29.410 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:29.410 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:29.410 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:29.410 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:29.410 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:29.410 [112/268] Linking static target lib/librte_net.a 00:01:29.410 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:29.410 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:29.410 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:29.410 [116/268] Compiling C 
object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:29.410 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:29.410 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:29.410 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:29.410 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:29.410 [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:29.410 [122/268] Linking static target lib/librte_mempool.a
00:01:29.410 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:29.410 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:29.410 [125/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:29.410 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:29.410 [127/268] Linking static target lib/librte_cmdline.a
00:01:29.410 [128/268] Linking static target lib/librte_eal.a
00:01:29.410 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:29.670 [130/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:29.670 [131/268] Linking static target lib/librte_rcu.a
00:01:29.670 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:29.670 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:29.670 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.670 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:29.670 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:29.670 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:29.670 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.670 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:29.670 [140/268] Linking static target lib/librte_mbuf.a
00:01:29.670 [141/268] Linking target lib/librte_log.so.24.1
00:01:29.670 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:29.670 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.670 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.670 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:29.670 [146/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:29.670 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:29.670 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:29.670 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:29.670 [150/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:29.670 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:29.670 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:29.670 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:29.670 [154/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:29.670 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:29.670 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:29.670 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:29.670 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:29.670 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:29.670 [160/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:29.670 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:29.670 [162/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:29.930 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:29.930 [164/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.930 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:29.930 [166/268] Linking static target lib/librte_timer.a
00:01:29.930 [167/268] Linking target lib/librte_kvargs.so.24.1
00:01:29.930 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:29.930 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:29.930 [170/268] Linking target lib/librte_telemetry.so.24.1
00:01:29.930 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:29.930 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:29.930 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:29.930 [174/268] Linking static target lib/librte_power.a
00:01:29.930 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:29.930 [176/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.930 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:29.930 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:29.930 [179/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:29.930 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:29.930 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:29.930 [182/268] Linking static target lib/librte_dmadev.a
00:01:29.930 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:29.930 [184/268] Linking static target lib/librte_compressdev.a
00:01:29.930 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:29.930 [186/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:29.930 [187/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:29.930 [188/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:29.930 [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:29.930 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:29.930 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:29.930 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:29.930 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:29.930 [194/268] Linking static target drivers/librte_bus_vdev.a
00:01:29.930 [195/268] Linking static target lib/librte_reorder.a
00:01:29.930 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:29.930 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:29.930 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:29.930 [199/268] Linking static target lib/librte_hash.a
00:01:29.930 [200/268] Linking static target lib/librte_security.a
00:01:29.930 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:30.192 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:30.192 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:30.192 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:30.192 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:30.192 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:30.192 [207/268] Linking static target drivers/librte_bus_pci.a
00:01:30.192 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:30.192 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:30.192 [210/268] Linking static target drivers/librte_mempool_ring.a
00:01:30.192 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.192 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.192 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:30.192 [214/268] Linking static target lib/librte_cryptodev.a
00:01:30.453 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.453 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.453 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.453 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:30.453 [219/268] Linking static target lib/librte_ethdev.a
00:01:30.712 [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.712 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.712 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.712 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.712 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.712 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:30.972 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.972 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.543 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:31.803 [229/268] Linking static target lib/librte_vhost.a
00:01:32.373 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.753 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.033 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.603 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.603 [234/268] Linking target lib/librte_eal.so.24.1
00:01:39.603 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:01:39.863 [236/268] Linking target lib/librte_ring.so.24.1
00:01:39.863 [237/268] Linking target lib/librte_timer.so.24.1
00:01:39.864 [238/268] Linking target lib/librte_pci.so.24.1
00:01:39.864 [239/268] Linking target lib/librte_meter.so.24.1
00:01:39.864 [240/268] Linking target lib/librte_dmadev.so.24.1
00:01:39.864 [241/268] Linking target drivers/librte_bus_vdev.so.24.1
00:01:39.864 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:01:39.864 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:01:39.864 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:01:39.864 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:01:39.864 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:01:39.864 [247/268] Linking target lib/librte_rcu.so.24.1
00:01:39.864 [248/268] Linking target drivers/librte_bus_pci.so.24.1
00:01:39.864 [249/268] Linking target lib/librte_mempool.so.24.1
00:01:40.123 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:01:40.123 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:01:40.123 [252/268] Linking target lib/librte_mbuf.so.24.1
00:01:40.123 [253/268] Linking target drivers/librte_mempool_ring.so.24.1
00:01:40.123 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:01:40.383 [255/268] Linking target lib/librte_reorder.so.24.1
00:01:40.383 [256/268] Linking target lib/librte_compressdev.so.24.1
00:01:40.383 [257/268] Linking target lib/librte_net.so.24.1
00:01:40.384 [258/268] Linking target lib/librte_cryptodev.so.24.1
00:01:40.384 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:01:40.384 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:01:40.384 [261/268] Linking target lib/librte_hash.so.24.1
00:01:40.384 [262/268] Linking target lib/librte_security.so.24.1
00:01:40.384 [263/268] Linking target lib/librte_cmdline.so.24.1
00:01:40.384 [264/268] Linking target lib/librte_ethdev.so.24.1
00:01:40.644 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:01:40.644 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:01:40.644 [267/268] Linking target lib/librte_power.so.24.1
00:01:40.644 [268/268] Linking target lib/librte_vhost.so.24.1
00:01:40.644 INFO: autodetecting backend as ninja
00:01:40.644 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96
00:01:52.864 CC lib/ut/ut.o
00:01:52.864 CC lib/ut_mock/mock.o
00:01:52.864 CC lib/log/log.o
00:01:52.864 CC lib/log/log_flags.o
00:01:52.864 CC lib/log/log_deprecated.o
00:01:52.864 LIB libspdk_ut_mock.a
00:01:52.864 LIB libspdk_ut.a
00:01:52.864 LIB libspdk_log.a
00:01:52.864 SO libspdk_ut_mock.so.6.0
00:01:52.864 SO libspdk_ut.so.2.0
00:01:52.864 SO libspdk_log.so.7.1
00:01:52.864 SYMLINK libspdk_ut_mock.so
00:01:52.864 SYMLINK libspdk_ut.so
00:01:52.864 SYMLINK libspdk_log.so
00:01:52.864 CC lib/util/bit_array.o
00:01:52.864 CC lib/util/base64.o
00:01:52.864 CC lib/util/cpuset.o
00:01:52.864 CC lib/util/crc16.o
00:01:52.864 CC lib/dma/dma.o
00:01:52.864 CC lib/ioat/ioat.o
00:01:52.864 CC lib/util/crc32.o
00:01:52.864 CXX lib/trace_parser/trace.o
00:01:52.864 CC lib/util/crc32c.o
00:01:52.864 CC lib/util/crc32_ieee.o
00:01:52.864 CC lib/util/crc64.o
00:01:52.864 CC lib/util/dif.o
00:01:52.864 CC lib/util/fd.o
00:01:52.864 CC lib/util/fd_group.o
00:01:52.864 CC lib/util/file.o
00:01:52.864 CC lib/util/hexlify.o
00:01:52.864 CC lib/util/iov.o
00:01:52.864 CC lib/util/math.o
00:01:52.864 CC lib/util/net.o
00:01:52.864 CC lib/util/pipe.o
00:01:52.864 CC lib/util/strerror_tls.o
00:01:52.864 CC lib/util/string.o
00:01:52.864 CC lib/util/uuid.o
00:01:52.864 CC lib/util/xor.o
00:01:52.864 CC lib/util/zipf.o
00:01:52.864 CC lib/util/md5.o
00:01:52.864 CC lib/vfio_user/host/vfio_user_pci.o
00:01:52.864 CC lib/vfio_user/host/vfio_user.o
00:01:52.864 LIB libspdk_dma.a
00:01:52.864 SO libspdk_dma.so.5.0
00:01:52.864 LIB libspdk_ioat.a
00:01:52.864 SO libspdk_ioat.so.7.0
00:01:52.864 SYMLINK libspdk_dma.so
00:01:52.864 SYMLINK libspdk_ioat.so
00:01:52.864 LIB libspdk_vfio_user.a
00:01:52.864 SO libspdk_vfio_user.so.5.0
00:01:52.864 LIB libspdk_util.a
00:01:52.864 SYMLINK libspdk_vfio_user.so
00:01:52.864 SO libspdk_util.so.10.1
00:01:52.864 SYMLINK libspdk_util.so
00:01:52.864 LIB libspdk_trace_parser.a
00:01:52.864 SO libspdk_trace_parser.so.6.0
00:01:52.864 SYMLINK libspdk_trace_parser.so
00:01:52.864 CC lib/rdma_utils/rdma_utils.o
00:01:52.864 CC lib/json/json_parse.o
00:01:52.864 CC lib/conf/conf.o
00:01:52.864 CC lib/json/json_util.o
00:01:52.864 CC lib/json/json_write.o
00:01:52.864 CC lib/idxd/idxd_user.o
00:01:52.864 CC lib/idxd/idxd.o
00:01:52.864 CC lib/env_dpdk/env.o
00:01:52.864 CC lib/idxd/idxd_kernel.o
00:01:52.864 CC lib/vmd/vmd.o
00:01:52.864 CC lib/env_dpdk/memory.o
00:01:52.864 CC lib/vmd/led.o
00:01:52.864 CC lib/env_dpdk/pci.o
00:01:52.864 CC lib/env_dpdk/init.o
00:01:52.864 CC lib/env_dpdk/threads.o
00:01:52.864 CC lib/env_dpdk/pci_ioat.o
00:01:52.864 CC lib/env_dpdk/pci_virtio.o
00:01:52.864 CC lib/env_dpdk/pci_vmd.o
00:01:52.864 CC lib/env_dpdk/pci_idxd.o
00:01:52.864 CC lib/env_dpdk/pci_event.o
00:01:52.864 CC lib/env_dpdk/sigbus_handler.o
00:01:52.864 CC lib/env_dpdk/pci_dpdk.o
00:01:52.864 CC lib/env_dpdk/pci_dpdk_2207.o
00:01:52.864 CC lib/env_dpdk/pci_dpdk_2211.o
00:01:52.864 LIB libspdk_conf.a
00:01:52.864 LIB libspdk_rdma_utils.a
00:01:52.864 SO libspdk_conf.so.6.0
00:01:52.864 SO libspdk_rdma_utils.so.1.0
00:01:52.864 LIB libspdk_json.a
00:01:52.864 SO libspdk_json.so.6.0
00:01:52.864 SYMLINK libspdk_conf.so
00:01:52.864 SYMLINK libspdk_rdma_utils.so
00:01:53.123 SYMLINK libspdk_json.so
00:01:53.123 LIB libspdk_idxd.a
00:01:53.123 SO libspdk_idxd.so.12.1
00:01:53.123 LIB libspdk_vmd.a
00:01:53.123 SO libspdk_vmd.so.6.0
00:01:53.123 SYMLINK libspdk_idxd.so
00:01:53.382 SYMLINK libspdk_vmd.so
00:01:53.382 CC lib/rdma_provider/common.o
00:01:53.382 CC lib/rdma_provider/rdma_provider_verbs.o
00:01:53.382 CC lib/jsonrpc/jsonrpc_server.o
00:01:53.382 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:01:53.382 CC lib/jsonrpc/jsonrpc_client.o
00:01:53.382 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:01:53.382 LIB libspdk_rdma_provider.a
00:01:53.641 SO libspdk_rdma_provider.so.7.0
00:01:53.641 LIB libspdk_jsonrpc.a
00:01:53.641 SO libspdk_jsonrpc.so.6.0
00:01:53.641 SYMLINK libspdk_rdma_provider.so
00:01:53.641 SYMLINK libspdk_jsonrpc.so
00:01:53.641 LIB libspdk_env_dpdk.a
00:01:53.641 SO libspdk_env_dpdk.so.15.1
00:01:53.900 SYMLINK libspdk_env_dpdk.so
00:01:53.900 CC lib/rpc/rpc.o
00:01:54.159 LIB libspdk_rpc.a
00:01:54.159 SO libspdk_rpc.so.6.0
00:01:54.159 SYMLINK libspdk_rpc.so
00:01:54.419 CC lib/notify/notify.o
00:01:54.419 CC lib/notify/notify_rpc.o
00:01:54.678 CC lib/trace/trace.o
00:01:54.678 CC lib/keyring/keyring.o
00:01:54.678 CC lib/trace/trace_flags.o
00:01:54.678 CC lib/keyring/keyring_rpc.o
00:01:54.678 CC lib/trace/trace_rpc.o
00:01:54.678 LIB libspdk_notify.a
00:01:54.678 SO libspdk_notify.so.6.0
00:01:54.678 LIB libspdk_keyring.a
00:01:54.678 LIB libspdk_trace.a
00:01:54.678 SO libspdk_keyring.so.2.0
00:01:54.938 SYMLINK libspdk_notify.so
00:01:54.938 SO libspdk_trace.so.11.0
00:01:54.938 SYMLINK libspdk_keyring.so
00:01:54.938 SYMLINK libspdk_trace.so
00:01:55.197 CC lib/thread/thread.o
00:01:55.197 CC lib/thread/iobuf.o
00:01:55.197 CC lib/sock/sock.o
00:01:55.197 CC lib/sock/sock_rpc.o
00:01:55.456 LIB libspdk_sock.a
00:01:55.456 SO libspdk_sock.so.10.0
00:01:55.715 SYMLINK libspdk_sock.so
00:01:55.975 CC lib/nvme/nvme_ctrlr_cmd.o
00:01:55.975 CC lib/nvme/nvme_ctrlr.o
00:01:55.975 CC lib/nvme/nvme_fabric.o
00:01:55.975 CC lib/nvme/nvme_ns_cmd.o
00:01:55.975 CC lib/nvme/nvme_ns.o
00:01:55.975 CC lib/nvme/nvme_pcie_common.o
00:01:55.975 CC lib/nvme/nvme_pcie.o
00:01:55.975 CC lib/nvme/nvme_qpair.o
00:01:55.975 CC lib/nvme/nvme.o
00:01:55.975 CC lib/nvme/nvme_quirks.o
00:01:55.975 CC lib/nvme/nvme_transport.o
00:01:55.975 CC lib/nvme/nvme_discovery.o
00:01:55.975 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:01:55.975 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:01:55.975 CC lib/nvme/nvme_tcp.o
00:01:55.975 CC lib/nvme/nvme_opal.o
00:01:55.975 CC lib/nvme/nvme_io_msg.o
00:01:55.975 CC lib/nvme/nvme_poll_group.o
00:01:55.975 CC lib/nvme/nvme_zns.o
00:01:55.975 CC lib/nvme/nvme_stubs.o
00:01:55.975 CC lib/nvme/nvme_auth.o
00:01:55.975 CC lib/nvme/nvme_cuse.o
00:01:55.975 CC lib/nvme/nvme_vfio_user.o
00:01:55.975 CC lib/nvme/nvme_rdma.o
00:01:56.235 LIB libspdk_thread.a
00:01:56.235 SO libspdk_thread.so.11.0
00:01:56.494 SYMLINK libspdk_thread.so
00:01:56.753 CC lib/blob/blobstore.o
00:01:56.753 CC lib/init/json_config.o
00:01:56.753 CC lib/blob/request.o
00:01:56.753 CC lib/init/subsystem.o
00:01:56.753 CC lib/blob/blob_bs_dev.o
00:01:56.753 CC lib/blob/zeroes.o
00:01:56.753 CC lib/init/subsystem_rpc.o
00:01:56.753 CC lib/fsdev/fsdev.o
00:01:56.753 CC lib/init/rpc.o
00:01:56.753 CC lib/fsdev/fsdev_rpc.o
00:01:56.753 CC lib/fsdev/fsdev_io.o
00:01:56.753 CC lib/vfu_tgt/tgt_endpoint.o
00:01:56.753 CC lib/vfu_tgt/tgt_rpc.o
00:01:56.753 CC lib/virtio/virtio.o
00:01:56.753 CC lib/virtio/virtio_vhost_user.o
00:01:56.753 CC lib/virtio/virtio_vfio_user.o
00:01:56.753 CC lib/virtio/virtio_pci.o
00:01:56.753 CC lib/accel/accel.o
00:01:56.753 CC lib/accel/accel_rpc.o
00:01:56.753 CC lib/accel/accel_sw.o
00:01:57.012 LIB libspdk_init.a
00:01:57.012 SO libspdk_init.so.6.0
00:01:57.012 LIB libspdk_virtio.a
00:01:57.012 LIB libspdk_vfu_tgt.a
00:01:57.012 SO libspdk_vfu_tgt.so.3.0
00:01:57.012 SO libspdk_virtio.so.7.0
00:01:57.012 SYMLINK libspdk_init.so
00:01:57.012 SYMLINK libspdk_vfu_tgt.so
00:01:57.012 SYMLINK libspdk_virtio.so
00:01:57.270 LIB libspdk_fsdev.a
00:01:57.270 SO libspdk_fsdev.so.2.0
00:01:57.270 SYMLINK libspdk_fsdev.so
00:01:57.270 CC lib/event/app.o
00:01:57.270 CC lib/event/reactor.o
00:01:57.270 CC lib/event/log_rpc.o
00:01:57.270 CC lib/event/app_rpc.o
00:01:57.270 CC lib/event/scheduler_static.o
00:01:57.529 LIB libspdk_accel.a
00:01:57.529 SO libspdk_accel.so.16.0
00:01:57.529 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:01:57.529 LIB libspdk_nvme.a
00:01:57.529 SYMLINK libspdk_accel.so
00:01:57.789 LIB libspdk_event.a
00:01:57.789 SO libspdk_nvme.so.15.0
00:01:57.789 SO libspdk_event.so.14.0
00:01:57.789 SYMLINK libspdk_event.so
00:01:57.789 SYMLINK libspdk_nvme.so
00:01:57.789 CC lib/bdev/bdev.o
00:01:57.789 CC lib/bdev/bdev_rpc.o
00:01:57.789 CC lib/bdev/bdev_zone.o
00:01:57.789 CC lib/bdev/part.o
00:01:57.789 CC lib/bdev/scsi_nvme.o
00:01:58.048 LIB libspdk_fuse_dispatcher.a
00:01:58.048 SO libspdk_fuse_dispatcher.so.1.0
00:01:58.048 SYMLINK libspdk_fuse_dispatcher.so
00:01:58.985 LIB libspdk_blob.a
00:01:58.985 SO libspdk_blob.so.11.0
00:01:58.985 SYMLINK libspdk_blob.so
00:01:59.244 CC lib/blobfs/blobfs.o
00:01:59.245 CC lib/blobfs/tree.o
00:01:59.245 CC lib/lvol/lvol.o
00:01:59.814 LIB libspdk_bdev.a
00:01:59.814 SO libspdk_bdev.so.17.0
00:01:59.814 SYMLINK libspdk_bdev.so
00:01:59.814 LIB libspdk_blobfs.a
00:01:59.814 SO libspdk_blobfs.so.10.0
00:01:59.814 LIB libspdk_lvol.a
00:02:00.075 SYMLINK libspdk_blobfs.so
00:02:00.075 SO libspdk_lvol.so.10.0
00:02:00.075 SYMLINK libspdk_lvol.so
00:02:00.075 CC lib/ublk/ublk.o
00:02:00.075 CC lib/nvmf/ctrlr.o
00:02:00.075 CC lib/ublk/ublk_rpc.o
00:02:00.075 CC lib/nvmf/ctrlr_discovery.o
00:02:00.075 CC lib/nvmf/ctrlr_bdev.o
00:02:00.075 CC lib/nvmf/subsystem.o
00:02:00.075 CC lib/nvmf/nvmf.o
00:02:00.075 CC lib/nvmf/nvmf_rpc.o
00:02:00.075 CC lib/nvmf/transport.o
00:02:00.075 CC lib/nvmf/stubs.o
00:02:00.075 CC lib/nvmf/tcp.o
00:02:00.075 CC lib/nvmf/mdns_server.o
00:02:00.075 CC lib/nvmf/vfio_user.o
00:02:00.075 CC lib/nbd/nbd.o
00:02:00.075 CC lib/scsi/dev.o
00:02:00.075 CC lib/nvmf/rdma.o
00:02:00.075 CC lib/nbd/nbd_rpc.o
00:02:00.075 CC lib/ftl/ftl_core.o
00:02:00.075 CC lib/scsi/lun.o
00:02:00.075 CC lib/nvmf/auth.o
00:02:00.075 CC lib/ftl/ftl_init.o
00:02:00.075 CC lib/scsi/port.o
00:02:00.075 CC lib/ftl/ftl_layout.o
00:02:00.075 CC lib/scsi/scsi.o
00:02:00.075 CC lib/scsi/scsi_bdev.o
00:02:00.075 CC lib/ftl/ftl_debug.o
00:02:00.075 CC lib/ftl/ftl_io.o
00:02:00.075 CC lib/ftl/ftl_sb.o
00:02:00.075 CC lib/scsi/scsi_pr.o
00:02:00.075 CC lib/scsi/scsi_rpc.o
00:02:00.075 CC lib/scsi/task.o
00:02:00.075 CC lib/ftl/ftl_l2p.o
00:02:00.075 CC lib/ftl/ftl_l2p_flat.o
00:02:00.075 CC lib/ftl/ftl_nv_cache.o
00:02:00.075 CC lib/ftl/ftl_band.o
00:02:00.075 CC lib/ftl/ftl_band_ops.o
00:02:00.075 CC lib/ftl/ftl_writer.o
00:02:00.075 CC lib/ftl/ftl_rq.o
00:02:00.075 CC lib/ftl/ftl_reloc.o
00:02:00.075 CC lib/ftl/ftl_p2l.o
00:02:00.075 CC lib/ftl/ftl_l2p_cache.o
00:02:00.075 CC lib/ftl/ftl_p2l_log.o
00:02:00.075 CC lib/ftl/mngt/ftl_mngt.o
00:02:00.075 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:00.075 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:00.075 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:00.335 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:00.335 CC lib/ftl/utils/ftl_md.o
00:02:00.335 CC lib/ftl/utils/ftl_conf.o
00:02:00.335 CC lib/ftl/utils/ftl_mempool.o
00:02:00.335 CC lib/ftl/utils/ftl_bitmap.o
00:02:00.335 CC lib/ftl/utils/ftl_property.o
00:02:00.335 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:00.335 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:00.335 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:00.335 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:00.335 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:00.335 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:00.335 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:00.335 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:00.335 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:00.335 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:00.335 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:00.335 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:00.335 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:00.335 CC lib/ftl/base/ftl_base_dev.o
00:02:00.335 CC lib/ftl/base/ftl_base_bdev.o
00:02:00.335 CC lib/ftl/ftl_trace.o
00:02:00.902 LIB libspdk_scsi.a
00:02:00.902 SO libspdk_scsi.so.9.0
00:02:00.902 LIB libspdk_nbd.a
00:02:00.902 SYMLINK libspdk_scsi.so
00:02:00.902 SO libspdk_nbd.so.7.0
00:02:00.902 SYMLINK libspdk_nbd.so
00:02:00.902 LIB libspdk_ublk.a
00:02:01.160 SO libspdk_ublk.so.3.0
00:02:01.160 SYMLINK libspdk_ublk.so
00:02:01.160 CC lib/vhost/vhost.o
00:02:01.161 CC lib/vhost/vhost_rpc.o
00:02:01.161 CC lib/vhost/vhost_scsi.o
00:02:01.161 CC lib/vhost/vhost_blk.o
00:02:01.161 CC lib/vhost/rte_vhost_user.o
00:02:01.161 LIB libspdk_ftl.a
00:02:01.161 CC lib/iscsi/conn.o
00:02:01.161 CC lib/iscsi/init_grp.o
00:02:01.161 CC lib/iscsi/iscsi.o
00:02:01.161 CC lib/iscsi/param.o
00:02:01.161 CC lib/iscsi/portal_grp.o
00:02:01.161 CC lib/iscsi/tgt_node.o
00:02:01.161 CC lib/iscsi/iscsi_subsystem.o
00:02:01.161 CC lib/iscsi/iscsi_rpc.o
00:02:01.161 CC lib/iscsi/task.o
00:02:01.420 SO libspdk_ftl.so.9.0
00:02:01.420 SYMLINK libspdk_ftl.so
00:02:01.988 LIB libspdk_vhost.a
00:02:01.988 SO libspdk_vhost.so.8.0
00:02:01.988 LIB libspdk_nvmf.a
00:02:01.988 SO libspdk_nvmf.so.20.0
00:02:01.988 SYMLINK libspdk_vhost.so
00:02:02.246 LIB libspdk_iscsi.a
00:02:02.246 SO libspdk_iscsi.so.8.0
00:02:02.246 SYMLINK libspdk_nvmf.so
00:02:02.246 SYMLINK libspdk_iscsi.so
00:02:02.816 CC module/env_dpdk/env_dpdk_rpc.o
00:02:02.816 CC module/vfu_device/vfu_virtio_scsi.o
00:02:02.816 CC module/vfu_device/vfu_virtio.o
00:02:02.816 CC module/vfu_device/vfu_virtio_blk.o
00:02:02.816 CC module/vfu_device/vfu_virtio_rpc.o
00:02:02.816 CC module/vfu_device/vfu_virtio_fs.o
00:02:03.076 LIB libspdk_env_dpdk_rpc.a
00:02:03.076 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:03.076 CC module/keyring/file/keyring.o
00:02:03.076 CC module/keyring/file/keyring_rpc.o
00:02:03.076 CC module/fsdev/aio/fsdev_aio.o
00:02:03.076 CC module/fsdev/aio/linux_aio_mgr.o
00:02:03.076 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:03.076 CC module/blob/bdev/blob_bdev.o
00:02:03.076 CC module/keyring/linux/keyring.o
00:02:03.076 CC module/accel/error/accel_error.o
00:02:03.076 CC module/accel/error/accel_error_rpc.o
00:02:03.076 CC module/keyring/linux/keyring_rpc.o
00:02:03.076 CC module/scheduler/gscheduler/gscheduler.o
00:02:03.076 CC module/sock/posix/posix.o
00:02:03.076 CC module/accel/dsa/accel_dsa.o
00:02:03.076 CC module/accel/ioat/accel_ioat.o
00:02:03.076 CC module/accel/iaa/accel_iaa.o
00:02:03.076 CC module/accel/dsa/accel_dsa_rpc.o
00:02:03.076 CC module/accel/ioat/accel_ioat_rpc.o
00:02:03.076 CC module/accel/iaa/accel_iaa_rpc.o
00:02:03.076 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:03.076 SO libspdk_env_dpdk_rpc.so.6.0
00:02:03.076 SYMLINK libspdk_env_dpdk_rpc.so
00:02:03.076 LIB libspdk_keyring_file.a
00:02:03.076 LIB libspdk_scheduler_gscheduler.a
00:02:03.076 LIB libspdk_keyring_linux.a
00:02:03.076 SO libspdk_keyring_file.so.2.0
00:02:03.076 LIB libspdk_scheduler_dpdk_governor.a
00:02:03.076 SO libspdk_scheduler_gscheduler.so.4.0
00:02:03.076 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:03.076 LIB libspdk_accel_ioat.a
00:02:03.076 SO libspdk_keyring_linux.so.1.0
00:02:03.076 LIB libspdk_scheduler_dynamic.a
00:02:03.335 LIB libspdk_accel_iaa.a
00:02:03.335 LIB libspdk_accel_error.a
00:02:03.335 SO libspdk_accel_ioat.so.6.0
00:02:03.335 SYMLINK libspdk_keyring_file.so
00:02:03.335 SO libspdk_scheduler_dynamic.so.4.0
00:02:03.335 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:03.335 LIB libspdk_blob_bdev.a
00:02:03.335 SYMLINK libspdk_scheduler_gscheduler.so
00:02:03.335 SYMLINK libspdk_keyring_linux.so
00:02:03.335 SO libspdk_accel_iaa.so.3.0
00:02:03.335 SO libspdk_accel_error.so.2.0
00:02:03.335 SO libspdk_blob_bdev.so.11.0
00:02:03.335 LIB libspdk_accel_dsa.a
00:02:03.335 SYMLINK libspdk_accel_ioat.so
00:02:03.335 SYMLINK libspdk_scheduler_dynamic.so
00:02:03.335 SYMLINK libspdk_accel_iaa.so
00:02:03.335 SYMLINK libspdk_accel_error.so
00:02:03.335 SO libspdk_accel_dsa.so.5.0
00:02:03.335 SYMLINK libspdk_blob_bdev.so
00:02:03.335 LIB libspdk_vfu_device.a
00:02:03.335 SYMLINK libspdk_accel_dsa.so
00:02:03.335 SO libspdk_vfu_device.so.3.0
00:02:03.595 SYMLINK libspdk_vfu_device.so
00:02:03.595 LIB libspdk_fsdev_aio.a
00:02:03.595 SO libspdk_fsdev_aio.so.1.0
00:02:03.595 LIB libspdk_sock_posix.a
00:02:03.595 SO libspdk_sock_posix.so.6.0
00:02:03.595 SYMLINK libspdk_fsdev_aio.so
00:02:03.595 SYMLINK libspdk_sock_posix.so
00:02:03.854 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:03.854 CC module/bdev/passthru/vbdev_passthru.o
00:02:03.854 CC module/bdev/aio/bdev_aio.o
00:02:03.854 CC module/bdev/ftl/bdev_ftl.o
00:02:03.854 CC module/bdev/aio/bdev_aio_rpc.o
00:02:03.854 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:03.854 CC module/bdev/delay/vbdev_delay.o
00:02:03.854 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:03.854 CC module/bdev/error/vbdev_error.o
00:02:03.854 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:03.854 CC module/bdev/raid/bdev_raid.o
00:02:03.854 CC module/bdev/error/vbdev_error_rpc.o
00:02:03.854 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:03.854 CC module/bdev/raid/bdev_raid_sb.o
00:02:03.854 CC module/bdev/raid/bdev_raid_rpc.o
00:02:03.854 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:03.854 CC module/bdev/raid/raid0.o
00:02:03.854 CC module/bdev/split/vbdev_split.o
00:02:03.854 CC module/bdev/raid/raid1.o
00:02:03.854 CC module/bdev/null/bdev_null.o
00:02:03.854 CC module/bdev/raid/concat.o
00:02:03.854 CC module/bdev/null/bdev_null_rpc.o
00:02:03.854 CC module/bdev/split/vbdev_split_rpc.o
00:02:03.854 CC module/bdev/nvme/bdev_nvme.o
00:02:03.854 CC module/bdev/nvme/nvme_rpc.o
00:02:03.854 CC module/bdev/nvme/bdev_mdns_client.o
00:02:03.854 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:03.854 CC module/bdev/lvol/vbdev_lvol.o
00:02:03.854 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:03.854 CC module/blobfs/bdev/blobfs_bdev.o
00:02:03.854 CC module/bdev/malloc/bdev_malloc.o
00:02:03.854 CC module/bdev/nvme/vbdev_opal.o
00:02:03.854 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:03.854 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:03.854 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:03.854 CC module/bdev/iscsi/bdev_iscsi.o
00:02:03.854 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:03.854 CC module/bdev/gpt/gpt.o
00:02:03.854 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:03.854 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:03.854 CC module/bdev/gpt/vbdev_gpt.o
00:02:03.854 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:04.112 LIB libspdk_blobfs_bdev.a
00:02:04.112 LIB libspdk_bdev_split.a
00:02:04.112 LIB libspdk_bdev_error.a
00:02:04.112 SO libspdk_blobfs_bdev.so.6.0
00:02:04.112 LIB libspdk_bdev_ftl.a
00:02:04.112 LIB libspdk_bdev_gpt.a
00:02:04.112 SO libspdk_bdev_split.so.6.0
00:02:04.112 SO libspdk_bdev_error.so.6.0
00:02:04.112 LIB libspdk_bdev_null.a
00:02:04.112 LIB libspdk_bdev_passthru.a
00:02:04.112 SO libspdk_bdev_ftl.so.6.0
00:02:04.112 LIB libspdk_bdev_aio.a
00:02:04.112 SO libspdk_bdev_gpt.so.6.0
00:02:04.112 SO libspdk_bdev_null.so.6.0
00:02:04.112 SYMLINK libspdk_blobfs_bdev.so
00:02:04.112 SYMLINK libspdk_bdev_split.so
00:02:04.112 SO libspdk_bdev_passthru.so.6.0
00:02:04.112 SYMLINK libspdk_bdev_error.so
00:02:04.112 SO libspdk_bdev_aio.so.6.0
00:02:04.112 LIB libspdk_bdev_iscsi.a
00:02:04.112 SYMLINK libspdk_bdev_gpt.so
00:02:04.112 SYMLINK libspdk_bdev_ftl.so
00:02:04.112 LIB libspdk_bdev_zone_block.a
00:02:04.112 LIB libspdk_bdev_delay.a
00:02:04.112 SYMLINK libspdk_bdev_null.so
00:02:04.112 LIB libspdk_bdev_malloc.a
00:02:04.112 SO libspdk_bdev_iscsi.so.6.0
00:02:04.112 SYMLINK libspdk_bdev_passthru.so
00:02:04.112 SO libspdk_bdev_zone_block.so.6.0
00:02:04.112 SO libspdk_bdev_delay.so.6.0
00:02:04.112 SO libspdk_bdev_malloc.so.6.0
00:02:04.371 SYMLINK libspdk_bdev_aio.so
00:02:04.371 SYMLINK libspdk_bdev_iscsi.so
00:02:04.371 LIB libspdk_bdev_virtio.a
00:02:04.371 SYMLINK libspdk_bdev_malloc.so
00:02:04.371 SYMLINK libspdk_bdev_delay.so
00:02:04.371 SYMLINK libspdk_bdev_zone_block.so
00:02:04.371 SO libspdk_bdev_virtio.so.6.0
00:02:04.371 LIB libspdk_bdev_lvol.a
00:02:04.371 SO libspdk_bdev_lvol.so.6.0
00:02:04.371 SYMLINK libspdk_bdev_virtio.so
00:02:04.371 SYMLINK libspdk_bdev_lvol.so
00:02:04.631 LIB libspdk_bdev_raid.a
00:02:04.631 SO libspdk_bdev_raid.so.6.0
00:02:04.890 SYMLINK libspdk_bdev_raid.so
00:02:05.827 LIB libspdk_bdev_nvme.a
00:02:05.828 SO libspdk_bdev_nvme.so.7.1
00:02:05.828 SYMLINK libspdk_bdev_nvme.so
00:02:06.396 CC module/event/subsystems/sock/sock.o
00:02:06.396 CC module/event/subsystems/iobuf/iobuf.o
00:02:06.396 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:06.396 CC module/event/subsystems/vmd/vmd.o
00:02:06.396 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:06.396 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:06.396 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:06.396 CC module/event/subsystems/fsdev/fsdev.o
00:02:06.396 CC module/event/subsystems/keyring/keyring.o
00:02:06.396 CC module/event/subsystems/scheduler/scheduler.o
00:02:06.654 LIB libspdk_event_vhost_blk.a
00:02:06.654 LIB libspdk_event_vfu_tgt.a
00:02:06.654 LIB libspdk_event_scheduler.a
00:02:06.654 SO libspdk_event_vhost_blk.so.3.0
00:02:06.654 LIB libspdk_event_sock.a
00:02:06.654 LIB libspdk_event_keyring.a
00:02:06.654 LIB libspdk_event_fsdev.a
00:02:06.654 LIB libspdk_event_vmd.a
00:02:06.654 LIB libspdk_event_iobuf.a
00:02:06.654 SO libspdk_event_scheduler.so.4.0
00:02:06.654 SO libspdk_event_vfu_tgt.so.3.0
00:02:06.654 SO libspdk_event_sock.so.5.0
00:02:06.654 SO libspdk_event_fsdev.so.1.0
00:02:06.654 SO libspdk_event_keyring.so.1.0
00:02:06.654 SO libspdk_event_vmd.so.6.0
00:02:06.654 SYMLINK libspdk_event_vhost_blk.so
00:02:06.654 SO libspdk_event_iobuf.so.3.0
00:02:06.654 SYMLINK libspdk_event_scheduler.so
00:02:06.654 SYMLINK libspdk_event_sock.so
00:02:06.654 SYMLINK libspdk_event_keyring.so
00:02:06.654 SYMLINK libspdk_event_vfu_tgt.so
00:02:06.654 SYMLINK libspdk_event_fsdev.so
00:02:06.654 SYMLINK libspdk_event_vmd.so
00:02:06.654 SYMLINK libspdk_event_iobuf.so
00:02:07.222 CC module/event/subsystems/accel/accel.o
00:02:07.222 LIB libspdk_event_accel.a
00:02:07.222 SO libspdk_event_accel.so.6.0
00:02:07.222 SYMLINK libspdk_event_accel.so
00:02:07.790 CC module/event/subsystems/bdev/bdev.o
00:02:07.790 LIB libspdk_event_bdev.a
00:02:07.790 SO libspdk_event_bdev.so.6.0
00:02:07.790 SYMLINK libspdk_event_bdev.so
00:02:08.050 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:08.050 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:08.050 CC module/event/subsystems/scsi/scsi.o
00:02:08.050 CC module/event/subsystems/nbd/nbd.o
00:02:08.050 CC module/event/subsystems/ublk/ublk.o
00:02:08.312 LIB libspdk_event_nbd.a
00:02:08.312 LIB libspdk_event_ublk.a
00:02:08.312 LIB libspdk_event_scsi.a
00:02:08.312 SO libspdk_event_ublk.so.3.0
00:02:08.312 SO libspdk_event_nbd.so.6.0
00:02:08.312 SO libspdk_event_scsi.so.6.0
00:02:08.312 LIB libspdk_event_nvmf.a
00:02:08.312 SO libspdk_event_nvmf.so.6.0
00:02:08.312 SYMLINK libspdk_event_nbd.so
00:02:08.312 SYMLINK libspdk_event_ublk.so
00:02:08.312 SYMLINK libspdk_event_scsi.so
00:02:08.573 SYMLINK libspdk_event_nvmf.so
00:02:08.832 CC module/event/subsystems/iscsi/iscsi.o
00:02:08.832 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:08.832 LIB libspdk_event_iscsi.a
00:02:08.832 LIB libspdk_event_vhost_scsi.a
00:02:08.832 SO libspdk_event_iscsi.so.6.0
00:02:08.832 SO libspdk_event_vhost_scsi.so.3.0
00:02:09.092 SYMLINK libspdk_event_iscsi.so
00:02:09.092 SYMLINK libspdk_event_vhost_scsi.so
00:02:09.092 SO libspdk.so.6.0
00:02:09.092 SYMLINK libspdk.so
00:02:09.667 CC app/trace_record/trace_record.o
00:02:09.667 CC app/spdk_nvme_identify/identify.o
00:02:09.667 CC app/spdk_nvme_perf/perf.o
00:02:09.667 CC test/rpc_client/rpc_client_test.o
00:02:09.667 CC app/spdk_lspci/spdk_lspci.o
00:02:09.667 CXX app/trace/trace.o
00:02:09.667 CC app/spdk_nvme_discover/discovery_aer.o
00:02:09.667 TEST_HEADER include/spdk/accel.h
00:02:09.667 TEST_HEADER include/spdk/accel_module.h
00:02:09.667 TEST_HEADER include/spdk/barrier.h
00:02:09.667 TEST_HEADER include/spdk/assert.h
00:02:09.667 TEST_HEADER include/spdk/base64.h
00:02:09.667 TEST_HEADER include/spdk/bdev.h
00:02:09.667 TEST_HEADER include/spdk/bdev_module.h
00:02:09.667 TEST_HEADER include/spdk/bdev_zone.h
00:02:09.667 CC app/spdk_top/spdk_top.o
00:02:09.667 TEST_HEADER include/spdk/bit_array.h
00:02:09.667 TEST_HEADER include/spdk/bit_pool.h
00:02:09.667 TEST_HEADER include/spdk/blob_bdev.h
00:02:09.667 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:09.667 TEST_HEADER include/spdk/blob.h
00:02:09.667 TEST_HEADER include/spdk/blobfs.h
00:02:09.667 TEST_HEADER include/spdk/conf.h
00:02:09.667 TEST_HEADER include/spdk/config.h
00:02:09.667 TEST_HEADER include/spdk/crc16.h
00:02:09.667 TEST_HEADER include/spdk/cpuset.h
00:02:09.667 TEST_HEADER include/spdk/crc32.h
00:02:09.667 TEST_HEADER include/spdk/dma.h
00:02:09.667 TEST_HEADER include/spdk/dif.h
00:02:09.667 TEST_HEADER include/spdk/crc64.h
00:02:09.667 TEST_HEADER include/spdk/endian.h
00:02:09.667 TEST_HEADER include/spdk/env_dpdk.h
00:02:09.667 TEST_HEADER include/spdk/env.h
00:02:09.667 TEST_HEADER include/spdk/event.h
00:02:09.667 TEST_HEADER include/spdk/fd.h
00:02:09.667 TEST_HEADER include/spdk/fd_group.h
00:02:09.667 TEST_HEADER include/spdk/fsdev_module.h
00:02:09.667 TEST_HEADER include/spdk/file.h
00:02:09.667 TEST_HEADER include/spdk/fsdev.h
00:02:09.667 TEST_HEADER include/spdk/ftl.h
00:02:09.667 TEST_HEADER include/spdk/fuse_dispatcher.h
00:02:09.667 TEST_HEADER include/spdk/gpt_spec.h
00:02:09.667 TEST_HEADER include/spdk/hexlify.h
00:02:09.667 TEST_HEADER include/spdk/histogram_data.h
00:02:09.667 TEST_HEADER include/spdk/idxd.h
00:02:09.667 TEST_HEADER include/spdk/idxd_spec.h
00:02:09.667 TEST_HEADER include/spdk/init.h
00:02:09.668 TEST_HEADER include/spdk/ioat.h
00:02:09.668 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:09.668 CC app/iscsi_tgt/iscsi_tgt.o
00:02:09.668 TEST_HEADER include/spdk/ioat_spec.h
00:02:09.668 TEST_HEADER include/spdk/iscsi_spec.h
00:02:09.668 TEST_HEADER include/spdk/jsonrpc.h
00:02:09.668 TEST_HEADER include/spdk/json.h
00:02:09.668 TEST_HEADER include/spdk/keyring.h
00:02:09.668 TEST_HEADER include/spdk/keyring_module.h
00:02:09.668 TEST_HEADER include/spdk/log.h
00:02:09.668 TEST_HEADER include/spdk/likely.h
00:02:09.668 TEST_HEADER include/spdk/lvol.h
00:02:09.668 CC app/spdk_dd/spdk_dd.o
00:02:09.668 
TEST_HEADER include/spdk/memory.h 00:02:09.668 TEST_HEADER include/spdk/mmio.h 00:02:09.668 TEST_HEADER include/spdk/md5.h 00:02:09.668 TEST_HEADER include/spdk/nbd.h 00:02:09.668 TEST_HEADER include/spdk/notify.h 00:02:09.668 TEST_HEADER include/spdk/net.h 00:02:09.668 TEST_HEADER include/spdk/nvme.h 00:02:09.668 TEST_HEADER include/spdk/nvme_intel.h 00:02:09.668 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:09.668 TEST_HEADER include/spdk/nvme_spec.h 00:02:09.668 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:09.668 TEST_HEADER include/spdk/nvme_zns.h 00:02:09.668 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:09.668 TEST_HEADER include/spdk/nvmf.h 00:02:09.668 TEST_HEADER include/spdk/nvmf_spec.h 00:02:09.668 CC app/nvmf_tgt/nvmf_main.o 00:02:09.668 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:09.668 TEST_HEADER include/spdk/nvmf_transport.h 00:02:09.668 TEST_HEADER include/spdk/opal.h 00:02:09.668 TEST_HEADER include/spdk/opal_spec.h 00:02:09.668 TEST_HEADER include/spdk/pci_ids.h 00:02:09.668 TEST_HEADER include/spdk/pipe.h 00:02:09.668 TEST_HEADER include/spdk/queue.h 00:02:09.668 TEST_HEADER include/spdk/rpc.h 00:02:09.668 TEST_HEADER include/spdk/reduce.h 00:02:09.668 TEST_HEADER include/spdk/scheduler.h 00:02:09.668 TEST_HEADER include/spdk/scsi.h 00:02:09.668 CC app/spdk_tgt/spdk_tgt.o 00:02:09.668 TEST_HEADER include/spdk/scsi_spec.h 00:02:09.668 TEST_HEADER include/spdk/sock.h 00:02:09.668 TEST_HEADER include/spdk/string.h 00:02:09.668 TEST_HEADER include/spdk/stdinc.h 00:02:09.668 TEST_HEADER include/spdk/thread.h 00:02:09.668 TEST_HEADER include/spdk/trace.h 00:02:09.668 TEST_HEADER include/spdk/trace_parser.h 00:02:09.668 TEST_HEADER include/spdk/ublk.h 00:02:09.668 TEST_HEADER include/spdk/tree.h 00:02:09.668 TEST_HEADER include/spdk/util.h 00:02:09.668 TEST_HEADER include/spdk/uuid.h 00:02:09.668 TEST_HEADER include/spdk/version.h 00:02:09.668 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:09.668 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:09.668 
TEST_HEADER include/spdk/vhost.h 00:02:09.668 TEST_HEADER include/spdk/vmd.h 00:02:09.668 TEST_HEADER include/spdk/xor.h 00:02:09.668 CXX test/cpp_headers/accel.o 00:02:09.668 TEST_HEADER include/spdk/zipf.h 00:02:09.668 CXX test/cpp_headers/accel_module.o 00:02:09.668 CXX test/cpp_headers/assert.o 00:02:09.668 CXX test/cpp_headers/barrier.o 00:02:09.668 CXX test/cpp_headers/base64.o 00:02:09.668 CXX test/cpp_headers/bdev.o 00:02:09.668 CXX test/cpp_headers/bdev_module.o 00:02:09.668 CXX test/cpp_headers/bdev_zone.o 00:02:09.668 CXX test/cpp_headers/bit_array.o 00:02:09.668 CXX test/cpp_headers/blob_bdev.o 00:02:09.668 CXX test/cpp_headers/blobfs.o 00:02:09.668 CXX test/cpp_headers/bit_pool.o 00:02:09.668 CXX test/cpp_headers/blobfs_bdev.o 00:02:09.668 CXX test/cpp_headers/conf.o 00:02:09.668 CXX test/cpp_headers/blob.o 00:02:09.668 CXX test/cpp_headers/config.o 00:02:09.668 CXX test/cpp_headers/crc16.o 00:02:09.668 CXX test/cpp_headers/cpuset.o 00:02:09.668 CXX test/cpp_headers/crc32.o 00:02:09.668 CXX test/cpp_headers/crc64.o 00:02:09.668 CXX test/cpp_headers/endian.o 00:02:09.668 CXX test/cpp_headers/dif.o 00:02:09.668 CXX test/cpp_headers/dma.o 00:02:09.668 CXX test/cpp_headers/env.o 00:02:09.668 CXX test/cpp_headers/env_dpdk.o 00:02:09.668 CXX test/cpp_headers/event.o 00:02:09.668 CXX test/cpp_headers/fd_group.o 00:02:09.668 CXX test/cpp_headers/fsdev.o 00:02:09.668 CXX test/cpp_headers/fd.o 00:02:09.668 CXX test/cpp_headers/file.o 00:02:09.668 CXX test/cpp_headers/fuse_dispatcher.o 00:02:09.668 CXX test/cpp_headers/ftl.o 00:02:09.668 CXX test/cpp_headers/fsdev_module.o 00:02:09.668 CXX test/cpp_headers/gpt_spec.o 00:02:09.668 CXX test/cpp_headers/histogram_data.o 00:02:09.668 CXX test/cpp_headers/hexlify.o 00:02:09.668 CXX test/cpp_headers/idxd.o 00:02:09.668 CXX test/cpp_headers/idxd_spec.o 00:02:09.668 CXX test/cpp_headers/ioat.o 00:02:09.668 CXX test/cpp_headers/init.o 00:02:09.668 CXX test/cpp_headers/iscsi_spec.o 00:02:09.668 CXX 
test/cpp_headers/ioat_spec.o 00:02:09.668 CXX test/cpp_headers/json.o 00:02:09.668 CXX test/cpp_headers/keyring_module.o 00:02:09.668 CXX test/cpp_headers/jsonrpc.o 00:02:09.668 CXX test/cpp_headers/keyring.o 00:02:09.668 CXX test/cpp_headers/likely.o 00:02:09.668 CXX test/cpp_headers/lvol.o 00:02:09.668 CXX test/cpp_headers/log.o 00:02:09.668 CXX test/cpp_headers/mmio.o 00:02:09.668 CXX test/cpp_headers/md5.o 00:02:09.668 CXX test/cpp_headers/nbd.o 00:02:09.668 CXX test/cpp_headers/memory.o 00:02:09.668 CXX test/cpp_headers/net.o 00:02:09.668 CXX test/cpp_headers/notify.o 00:02:09.668 CXX test/cpp_headers/nvme_intel.o 00:02:09.668 CXX test/cpp_headers/nvme_ocssd.o 00:02:09.668 CC app/fio/nvme/fio_plugin.o 00:02:09.668 CXX test/cpp_headers/nvme.o 00:02:09.668 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:09.668 CXX test/cpp_headers/nvme_zns.o 00:02:09.668 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:09.668 CXX test/cpp_headers/nvme_spec.o 00:02:09.668 CC examples/ioat/perf/perf.o 00:02:09.668 CXX test/cpp_headers/nvmf_cmd.o 00:02:09.668 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:09.668 CXX test/cpp_headers/nvmf.o 00:02:09.668 CC test/app/histogram_perf/histogram_perf.o 00:02:09.668 CC examples/util/zipf/zipf.o 00:02:09.668 CC test/env/vtophys/vtophys.o 00:02:09.668 CC examples/ioat/verify/verify.o 00:02:09.668 CXX test/cpp_headers/nvmf_spec.o 00:02:09.668 CC test/thread/poller_perf/poller_perf.o 00:02:09.668 CC test/env/memory/memory_ut.o 00:02:09.668 CC test/app/bdev_svc/bdev_svc.o 00:02:09.668 CC test/app/jsoncat/jsoncat.o 00:02:09.668 CC test/dma/test_dma/test_dma.o 00:02:09.668 CC app/fio/bdev/fio_plugin.o 00:02:09.668 CXX test/cpp_headers/nvmf_transport.o 00:02:09.668 CC test/env/pci/pci_ut.o 00:02:09.668 CC test/app/stub/stub.o 00:02:09.958 LINK spdk_nvme_discover 00:02:09.958 LINK spdk_lspci 00:02:09.958 LINK interrupt_tgt 00:02:09.958 LINK spdk_trace_record 00:02:09.958 LINK rpc_client_test 00:02:10.218 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:10.218 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:10.218 CC test/env/mem_callbacks/mem_callbacks.o 00:02:10.218 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:10.218 LINK poller_perf 00:02:10.218 LINK histogram_perf 00:02:10.218 LINK spdk_tgt 00:02:10.218 LINK nvmf_tgt 00:02:10.218 LINK env_dpdk_post_init 00:02:10.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:10.218 CXX test/cpp_headers/opal.o 00:02:10.218 CXX test/cpp_headers/pci_ids.o 00:02:10.218 CXX test/cpp_headers/opal_spec.o 00:02:10.218 CXX test/cpp_headers/pipe.o 00:02:10.218 CXX test/cpp_headers/queue.o 00:02:10.218 LINK bdev_svc 00:02:10.218 CXX test/cpp_headers/rpc.o 00:02:10.218 CXX test/cpp_headers/reduce.o 00:02:10.218 CXX test/cpp_headers/scheduler.o 00:02:10.218 CXX test/cpp_headers/scsi.o 00:02:10.218 CXX test/cpp_headers/scsi_spec.o 00:02:10.218 CXX test/cpp_headers/sock.o 00:02:10.218 CXX test/cpp_headers/stdinc.o 00:02:10.218 LINK verify 00:02:10.218 CXX test/cpp_headers/string.o 00:02:10.218 CXX test/cpp_headers/thread.o 00:02:10.218 CXX test/cpp_headers/trace.o 00:02:10.218 CXX test/cpp_headers/trace_parser.o 00:02:10.218 CXX test/cpp_headers/tree.o 00:02:10.218 CXX test/cpp_headers/util.o 00:02:10.218 CXX test/cpp_headers/ublk.o 00:02:10.218 CXX test/cpp_headers/uuid.o 00:02:10.218 CXX test/cpp_headers/vfio_user_pci.o 00:02:10.218 CXX test/cpp_headers/version.o 00:02:10.218 CXX test/cpp_headers/vfio_user_spec.o 00:02:10.218 LINK jsoncat 00:02:10.218 LINK iscsi_tgt 00:02:10.218 LINK vtophys 00:02:10.218 CXX test/cpp_headers/vhost.o 00:02:10.218 CXX test/cpp_headers/vmd.o 00:02:10.218 CXX test/cpp_headers/xor.o 00:02:10.218 CXX test/cpp_headers/zipf.o 00:02:10.218 LINK zipf 00:02:10.218 LINK ioat_perf 00:02:10.477 LINK spdk_dd 00:02:10.477 LINK stub 00:02:10.477 LINK pci_ut 00:02:10.477 LINK spdk_trace 00:02:10.736 LINK test_dma 00:02:10.736 CC test/event/reactor/reactor.o 00:02:10.736 CC test/event/event_perf/event_perf.o 00:02:10.736 CC 
test/event/app_repeat/app_repeat.o 00:02:10.736 CC test/event/reactor_perf/reactor_perf.o 00:02:10.736 CC examples/idxd/perf/perf.o 00:02:10.736 LINK nvme_fuzz 00:02:10.736 LINK spdk_bdev 00:02:10.736 LINK spdk_nvme 00:02:10.736 CC test/event/scheduler/scheduler.o 00:02:10.736 CC examples/sock/hello_world/hello_sock.o 00:02:10.736 CC examples/vmd/led/led.o 00:02:10.736 CC examples/vmd/lsvmd/lsvmd.o 00:02:10.736 LINK vhost_fuzz 00:02:10.736 CC examples/thread/thread/thread_ex.o 00:02:10.736 LINK reactor 00:02:10.736 LINK mem_callbacks 00:02:10.736 LINK reactor_perf 00:02:10.995 LINK spdk_nvme_perf 00:02:10.995 LINK event_perf 00:02:10.995 LINK app_repeat 00:02:10.995 LINK spdk_top 00:02:10.995 LINK led 00:02:10.995 CC app/vhost/vhost.o 00:02:10.995 LINK lsvmd 00:02:10.995 LINK spdk_nvme_identify 00:02:10.995 LINK hello_sock 00:02:10.995 LINK scheduler 00:02:10.995 LINK idxd_perf 00:02:10.995 LINK thread 00:02:10.996 CC test/nvme/overhead/overhead.o 00:02:10.996 CC test/nvme/sgl/sgl.o 00:02:10.996 CC test/nvme/fused_ordering/fused_ordering.o 00:02:10.996 CC test/nvme/e2edp/nvme_dp.o 00:02:11.254 CC test/nvme/fdp/fdp.o 00:02:11.254 CC test/nvme/reserve/reserve.o 00:02:11.254 CC test/nvme/connect_stress/connect_stress.o 00:02:11.254 CC test/nvme/reset/reset.o 00:02:11.254 CC test/nvme/aer/aer.o 00:02:11.254 CC test/nvme/compliance/nvme_compliance.o 00:02:11.254 CC test/nvme/boot_partition/boot_partition.o 00:02:11.254 CC test/nvme/simple_copy/simple_copy.o 00:02:11.254 CC test/blobfs/mkfs/mkfs.o 00:02:11.254 CC test/nvme/startup/startup.o 00:02:11.254 CC test/nvme/cuse/cuse.o 00:02:11.254 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:11.254 CC test/nvme/err_injection/err_injection.o 00:02:11.254 LINK vhost 00:02:11.254 CC test/accel/dif/dif.o 00:02:11.254 LINK memory_ut 00:02:11.254 CC test/lvol/esnap/esnap.o 00:02:11.254 LINK boot_partition 00:02:11.254 LINK startup 00:02:11.254 LINK connect_stress 00:02:11.254 LINK doorbell_aers 00:02:11.254 LINK fused_ordering 
00:02:11.254 LINK reserve 00:02:11.254 LINK err_injection 00:02:11.254 LINK simple_copy 00:02:11.254 LINK mkfs 00:02:11.254 LINK sgl 00:02:11.514 LINK reset 00:02:11.514 LINK overhead 00:02:11.514 LINK nvme_dp 00:02:11.514 LINK aer 00:02:11.514 CC examples/nvme/abort/abort.o 00:02:11.514 CC examples/nvme/arbitration/arbitration.o 00:02:11.514 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:11.514 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:11.514 CC examples/nvme/reconnect/reconnect.o 00:02:11.514 LINK nvme_compliance 00:02:11.514 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:11.514 CC examples/nvme/hotplug/hotplug.o 00:02:11.514 LINK fdp 00:02:11.514 CC examples/nvme/hello_world/hello_world.o 00:02:11.514 CC examples/accel/perf/accel_perf.o 00:02:11.514 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:11.514 CC examples/blob/hello_world/hello_blob.o 00:02:11.514 CC examples/blob/cli/blobcli.o 00:02:11.514 LINK pmr_persistence 00:02:11.514 LINK cmb_copy 00:02:11.773 LINK hotplug 00:02:11.773 LINK iscsi_fuzz 00:02:11.773 LINK hello_world 00:02:11.773 LINK arbitration 00:02:11.773 LINK abort 00:02:11.773 LINK reconnect 00:02:11.773 LINK dif 00:02:11.773 LINK hello_blob 00:02:11.773 LINK nvme_manage 00:02:11.773 LINK hello_fsdev 00:02:12.033 LINK accel_perf 00:02:12.033 LINK blobcli 00:02:12.292 LINK cuse 00:02:12.292 CC test/bdev/bdevio/bdevio.o 00:02:12.292 CC examples/bdev/hello_world/hello_bdev.o 00:02:12.292 CC examples/bdev/bdevperf/bdevperf.o 00:02:12.551 LINK bdevio 00:02:12.551 LINK hello_bdev 00:02:13.120 LINK bdevperf 00:02:13.689 CC examples/nvmf/nvmf/nvmf.o 00:02:13.689 LINK nvmf 00:02:15.119 LINK esnap 00:02:15.119 00:02:15.119 real 0m55.451s 00:02:15.119 user 8m0.919s 00:02:15.119 sys 3m40.654s 00:02:15.119 17:20:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:15.119 17:20:17 make -- common/autotest_common.sh@10 -- $ set +x 00:02:15.119 ************************************ 00:02:15.119 END TEST make 00:02:15.119 
************************************ 00:02:15.119 17:20:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:15.119 17:20:17 -- pm/common@31 -- $ signal_monitor_resources TERM 00:02:15.119 17:20:17 -- pm/common@42 -- $ local monitor pid pids signal=TERM 00:02:15.119 17:20:17 -- pm/common@44 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.119 17:20:17 -- pm/common@45 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:15.119 17:20:17 -- pm/common@46 -- $ pid=3179474 00:02:15.119 17:20:17 -- pm/common@52 -- $ kill -TERM 3179474 00:02:15.119 17:20:17 -- pm/common@44 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.119 17:20:17 -- pm/common@45 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:15.119 17:20:17 -- pm/common@46 -- $ pid=3179475 00:02:15.119 17:20:17 -- pm/common@52 -- $ kill -TERM 3179475 00:02:15.119 17:20:17 -- pm/common@44 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.119 17:20:17 -- pm/common@45 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:15.119 17:20:17 -- pm/common@46 -- $ pid=3179478 00:02:15.119 17:20:17 -- pm/common@52 -- $ kill -TERM 3179478 00:02:15.119 17:20:17 -- pm/common@44 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.119 17:20:17 -- pm/common@45 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:15.119 17:20:17 -- pm/common@46 -- $ pid=3179501 00:02:15.119 17:20:17 -- pm/common@52 -- $ sudo -E kill -TERM 3179501 00:02:15.119 17:20:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:15.119 17:20:17 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:15.427 17:20:17 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 
00:02:15.427 17:20:17 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:15.427 17:20:17 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:15.427 17:20:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:15.427 17:20:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:15.427 17:20:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:15.427 17:20:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:15.427 17:20:17 -- scripts/common.sh@336 -- # IFS=.-: 00:02:15.427 17:20:17 -- scripts/common.sh@336 -- # read -ra ver1 00:02:15.427 17:20:17 -- scripts/common.sh@337 -- # IFS=.-: 00:02:15.427 17:20:17 -- scripts/common.sh@337 -- # read -ra ver2 00:02:15.427 17:20:17 -- scripts/common.sh@338 -- # local 'op=<' 00:02:15.427 17:20:17 -- scripts/common.sh@340 -- # ver1_l=2 00:02:15.427 17:20:17 -- scripts/common.sh@341 -- # ver2_l=1 00:02:15.427 17:20:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:15.427 17:20:17 -- scripts/common.sh@344 -- # case "$op" in 00:02:15.427 17:20:17 -- scripts/common.sh@345 -- # : 1 00:02:15.427 17:20:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:15.427 17:20:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:15.427 17:20:17 -- scripts/common.sh@365 -- # decimal 1 00:02:15.427 17:20:17 -- scripts/common.sh@353 -- # local d=1 00:02:15.427 17:20:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:15.427 17:20:17 -- scripts/common.sh@355 -- # echo 1 00:02:15.427 17:20:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:15.427 17:20:17 -- scripts/common.sh@366 -- # decimal 2 00:02:15.427 17:20:17 -- scripts/common.sh@353 -- # local d=2 00:02:15.427 17:20:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:15.427 17:20:17 -- scripts/common.sh@355 -- # echo 2 00:02:15.427 17:20:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:15.427 17:20:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:15.427 17:20:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:15.427 17:20:17 -- scripts/common.sh@368 -- # return 0 00:02:15.427 17:20:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:15.427 17:20:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:15.427 --rc genhtml_branch_coverage=1 00:02:15.427 --rc genhtml_function_coverage=1 00:02:15.427 --rc genhtml_legend=1 00:02:15.427 --rc geninfo_all_blocks=1 00:02:15.427 --rc geninfo_unexecuted_blocks=1 00:02:15.427 00:02:15.427 ' 00:02:15.427 17:20:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:15.427 --rc genhtml_branch_coverage=1 00:02:15.427 --rc genhtml_function_coverage=1 00:02:15.427 --rc genhtml_legend=1 00:02:15.427 --rc geninfo_all_blocks=1 00:02:15.427 --rc geninfo_unexecuted_blocks=1 00:02:15.427 00:02:15.427 ' 00:02:15.427 17:20:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:15.427 --rc genhtml_branch_coverage=1 00:02:15.427 --rc 
genhtml_function_coverage=1 00:02:15.427 --rc genhtml_legend=1 00:02:15.427 --rc geninfo_all_blocks=1 00:02:15.427 --rc geninfo_unexecuted_blocks=1 00:02:15.427 00:02:15.427 ' 00:02:15.427 17:20:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:15.427 --rc genhtml_branch_coverage=1 00:02:15.427 --rc genhtml_function_coverage=1 00:02:15.427 --rc genhtml_legend=1 00:02:15.427 --rc geninfo_all_blocks=1 00:02:15.428 --rc geninfo_unexecuted_blocks=1 00:02:15.428 00:02:15.428 ' 00:02:15.428 17:20:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:15.428 17:20:17 -- nvmf/common.sh@7 -- # uname -s 00:02:15.428 17:20:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:15.428 17:20:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:15.428 17:20:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:15.428 17:20:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:15.428 17:20:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:15.428 17:20:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:15.428 17:20:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:15.428 17:20:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:15.428 17:20:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:15.428 17:20:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:15.428 17:20:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:15.428 17:20:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:15.428 17:20:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:15.428 17:20:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:15.428 17:20:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:15.428 17:20:17 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:15.428 17:20:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:15.428 17:20:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:15.428 17:20:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:15.428 17:20:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.428 17:20:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.428 17:20:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.428 17:20:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.428 17:20:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.428 17:20:17 -- paths/export.sh@5 -- # export PATH 00:02:15.428 17:20:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.428 17:20:17 -- nvmf/common.sh@51 -- # : 0 00:02:15.428 17:20:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:15.428 17:20:17 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:15.428 17:20:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:15.428 17:20:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:15.428 17:20:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:15.428 17:20:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:15.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:15.428 17:20:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:15.428 17:20:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:15.428 17:20:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:15.428 17:20:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:15.428 17:20:17 -- spdk/autotest.sh@32 -- # uname -s 00:02:15.428 17:20:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:15.428 17:20:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:15.428 17:20:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:15.428 17:20:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:15.428 17:20:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:15.428 17:20:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:15.428 17:20:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:15.428 17:20:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:15.428 17:20:17 -- spdk/autotest.sh@48 -- # udevadm_pid=3241931 00:02:15.428 17:20:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:15.428 17:20:17 -- pm/common@17 -- # local monitor 00:02:15.428 17:20:17 -- pm/common@19 -- # [[ -z '' ]] 00:02:15.428 17:20:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:15.428 17:20:17 -- pm/common@21 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.428 
17:20:17 -- pm/common@21 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.428 17:20:17 -- pm/common@21 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.428 17:20:17 -- pm/common@21 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.428 17:20:17 -- pm/common@23 -- # date +%s 00:02:15.428 17:20:17 -- pm/common@23 -- # date +%s 00:02:15.428 17:20:17 -- pm/common@27 -- # sleep 1 00:02:15.428 17:20:17 -- pm/common@23 -- # date +%s 00:02:15.428 17:20:17 -- pm/common@23 -- # date +%s 00:02:15.428 17:20:17 -- pm/common@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732033217 00:02:15.428 17:20:17 -- pm/common@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732033217 00:02:15.428 17:20:17 -- pm/common@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732033217 00:02:15.428 17:20:17 -- pm/common@23 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732033217 00:02:15.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732033217_collect-cpu-load.pm.log 00:02:15.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732033217_collect-vmstat.pm.log 00:02:15.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732033217_collect-cpu-temp.pm.log 00:02:15.428 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732033217_collect-bmc-pm.bmc.pm.log 00:02:16.367 17:20:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:16.367 17:20:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:16.367 17:20:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:16.367 17:20:18 -- common/autotest_common.sh@10 -- # set +x 00:02:16.367 17:20:18 -- spdk/autotest.sh@59 -- # create_test_list 00:02:16.367 17:20:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:16.367 17:20:18 -- common/autotest_common.sh@10 -- # set +x 00:02:16.367 17:20:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:16.367 17:20:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.367 17:20:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.367 17:20:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:16.367 17:20:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.367 17:20:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:16.367 17:20:18 -- common/autotest_common.sh@1457 -- # uname 00:02:16.367 17:20:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:16.367 17:20:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:16.367 17:20:18 -- common/autotest_common.sh@1477 -- # uname 00:02:16.367 17:20:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:16.367 17:20:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:16.367 17:20:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 
00:02:16.626 lcov: LCOV version 1.15 00:02:16.626 17:20:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:34.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:34.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:42.840 17:20:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:42.840 17:20:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:42.840 17:20:43 -- common/autotest_common.sh@10 -- # set +x 00:02:42.840 17:20:43 -- spdk/autotest.sh@78 -- # rm -f 00:02:42.840 17:20:43 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.220 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:44.220 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:44.220 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:80:04.4 (8086 
2021): Already using the ioatdma driver 00:02:44.479 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:44.479 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:44.738 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:44.738 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:44.738 17:20:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:44.738 17:20:46 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:44.738 17:20:46 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:44.738 17:20:46 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:44.738 17:20:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:44.738 17:20:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:44.738 17:20:46 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:44.738 17:20:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.738 17:20:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:44.738 17:20:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:44.738 17:20:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.738 17:20:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:44.738 17:20:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:44.738 17:20:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:44.738 17:20:46 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:44.738 No valid GPT data, bailing 00:02:44.738 17:20:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:44.738 17:20:46 -- scripts/common.sh@394 -- # pt= 00:02:44.738 17:20:46 -- scripts/common.sh@395 -- # return 1 00:02:44.738 17:20:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:44.738 1+0 records in 00:02:44.738 1+0 records out 00:02:44.738 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.00460277 s, 228 MB/s 00:02:44.738 17:20:46 -- spdk/autotest.sh@105 -- # sync 00:02:44.738 17:20:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:44.738 17:20:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:44.738 17:20:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:51.326 17:20:52 -- spdk/autotest.sh@111 -- # uname -s 00:02:51.326 17:20:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:51.326 17:20:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:51.326 17:20:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:53.233 Hugepages 00:02:53.233 node hugesize free / total 00:02:53.233 node0 1048576kB 0 / 0 00:02:53.233 node0 2048kB 0 / 0 00:02:53.233 node1 1048576kB 0 / 0 00:02:53.233 node1 2048kB 0 / 0 00:02:53.233 00:02:53.233 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.233 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:53.233 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:53.233 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:53.233 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:53.233 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:53.233 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:53.233 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:53.233 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:53.233 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:53.233 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:53.233 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:53.233 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:53.233 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:53.233 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:53.233 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:53.233 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:53.233 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:53.233 17:20:55 -- spdk/autotest.sh@117 -- # uname -s 00:02:53.233 
17:20:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:53.233 17:20:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:53.233 17:20:55 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.526 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:56.526 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:57.097 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.097 17:20:59 -- common/autotest_common.sh@1517 -- # sleep 1 00:02:58.036 17:21:00 -- common/autotest_common.sh@1518 -- # bdfs=() 00:02:58.036 17:21:00 -- common/autotest_common.sh@1518 -- # local bdfs 00:02:58.036 17:21:00 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:02:58.036 17:21:00 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:02:58.036 17:21:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:58.036 17:21:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:58.036 17:21:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:58.036 17:21:00 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:58.036 17:21:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:58.296 17:21:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:58.296 17:21:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:02:58.296 17:21:00 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.835 Waiting for block devices as requested 00:03:01.094 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:01.094 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:01.095 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:01.354 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:01.354 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:01.354 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:01.613 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:01.613 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:01.613 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:01.613 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:01.873 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:01.873 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:01.873 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:02.133 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:02.133 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:02.133 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:02.392 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:02.392 17:21:04 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:02.392 17:21:04 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:02.392 17:21:04 -- common/autotest_common.sh@1487 -- # 
bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:02.392 17:21:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:02.392 17:21:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:02.392 17:21:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:02.392 17:21:04 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:02.392 17:21:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:02.392 17:21:04 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:02.392 17:21:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:02.392 17:21:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:02.392 17:21:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:02.392 17:21:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:02.392 17:21:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:02.392 17:21:04 -- common/autotest_common.sh@1543 -- # continue 00:03:02.392 17:21:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:02.392 17:21:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:02.392 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:03:02.392 17:21:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:02.392 17:21:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:02.392 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:03:02.392 17:21:04 -- spdk/autotest.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.701 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:05.701 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:06.270 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:06.270 17:21:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:06.270 17:21:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:06.270 17:21:08 -- common/autotest_common.sh@10 -- # set +x 00:03:06.270 17:21:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:06.270 17:21:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:06.270 17:21:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:06.270 17:21:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:06.270 17:21:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:06.270 17:21:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:06.270 17:21:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:06.530 17:21:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:06.530 17:21:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:06.530 17:21:08 -- 
common/autotest_common.sh@1498 -- # local bdfs 00:03:06.530 17:21:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:06.530 17:21:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:06.530 17:21:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:06.530 17:21:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:06.530 17:21:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:06.530 17:21:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:06.530 17:21:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:06.530 17:21:08 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:06.530 17:21:08 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:06.530 17:21:08 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:06.530 17:21:08 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:06.530 17:21:08 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:06.530 17:21:08 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:06.530 17:21:08 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3256891 00:03:06.530 17:21:08 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:06.530 17:21:08 -- common/autotest_common.sh@1585 -- # waitforlisten 3256891 00:03:06.530 17:21:08 -- common/autotest_common.sh@835 -- # '[' -z 3256891 ']' 00:03:06.530 17:21:08 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:06.530 17:21:08 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:06.530 17:21:08 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:06.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:06.530 17:21:08 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:06.530 17:21:08 -- common/autotest_common.sh@10 -- # set +x 00:03:06.530 [2024-11-19 17:21:08.632751] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:06.530 [2024-11-19 17:21:08.632800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256891 ] 00:03:06.530 [2024-11-19 17:21:08.708159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:06.530 [2024-11-19 17:21:08.748618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:06.788 17:21:08 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:06.788 17:21:08 -- common/autotest_common.sh@868 -- # return 0 00:03:06.788 17:21:08 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:06.788 17:21:08 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:06.788 17:21:08 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:10.081 nvme0n1 00:03:10.081 17:21:11 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:10.081 [2024-11-19 17:21:12.157897] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:10.081 request: 00:03:10.081 { 00:03:10.081 "nvme_ctrlr_name": "nvme0", 00:03:10.081 "password": "test", 00:03:10.081 "method": "bdev_nvme_opal_revert", 00:03:10.081 "req_id": 1 00:03:10.081 } 00:03:10.081 Got JSON-RPC error response 00:03:10.081 response: 00:03:10.081 { 00:03:10.081 "code": -32602, 00:03:10.081 "message": 
"Invalid parameters" 00:03:10.081 } 00:03:10.081 17:21:12 -- common/autotest_common.sh@1591 -- # true 00:03:10.081 17:21:12 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:10.081 17:21:12 -- common/autotest_common.sh@1595 -- # killprocess 3256891 00:03:10.081 17:21:12 -- common/autotest_common.sh@954 -- # '[' -z 3256891 ']' 00:03:10.081 17:21:12 -- common/autotest_common.sh@958 -- # kill -0 3256891 00:03:10.081 17:21:12 -- common/autotest_common.sh@959 -- # uname 00:03:10.081 17:21:12 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:10.081 17:21:12 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3256891 00:03:10.081 17:21:12 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:10.081 17:21:12 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:10.081 17:21:12 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3256891' 00:03:10.081 killing process with pid 3256891 00:03:10.081 17:21:12 -- common/autotest_common.sh@973 -- # kill 3256891 00:03:10.081 17:21:12 -- common/autotest_common.sh@978 -- # wait 3256891 00:03:11.988 17:21:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:11.988 17:21:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:11.988 17:21:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:11.988 17:21:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:11.988 17:21:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:11.988 17:21:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.988 17:21:13 -- common/autotest_common.sh@10 -- # set +x 00:03:11.988 17:21:13 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:11.988 17:21:13 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:11.988 17:21:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:11.988 17:21:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:11.988 17:21:13 -- common/autotest_common.sh@10 -- # set 
+x 00:03:11.988 ************************************ 00:03:11.988 START TEST env 00:03:11.988 ************************************ 00:03:11.988 17:21:13 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:11.988 * Looking for test storage... 00:03:11.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:11.988 17:21:13 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.988 17:21:13 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.988 17:21:13 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.988 17:21:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.988 17:21:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.988 17:21:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.988 17:21:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.988 17:21:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.988 17:21:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.988 17:21:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.988 17:21:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.988 17:21:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.988 17:21:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.988 17:21:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.988 17:21:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.988 17:21:14 env -- scripts/common.sh@344 -- # case "$op" in 00:03:11.988 17:21:14 env -- scripts/common.sh@345 -- # : 1 00:03:11.988 17:21:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.988 17:21:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.988 17:21:14 env -- scripts/common.sh@365 -- # decimal 1 00:03:11.988 17:21:14 env -- scripts/common.sh@353 -- # local d=1 00:03:11.988 17:21:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.988 17:21:14 env -- scripts/common.sh@355 -- # echo 1 00:03:11.988 17:21:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.988 17:21:14 env -- scripts/common.sh@366 -- # decimal 2 00:03:11.988 17:21:14 env -- scripts/common.sh@353 -- # local d=2 00:03:11.988 17:21:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.988 17:21:14 env -- scripts/common.sh@355 -- # echo 2 00:03:11.988 17:21:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.988 17:21:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.988 17:21:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.988 17:21:14 env -- scripts/common.sh@368 -- # return 0 00:03:11.988 17:21:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.988 17:21:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.988 --rc genhtml_branch_coverage=1 00:03:11.988 --rc genhtml_function_coverage=1 00:03:11.989 --rc genhtml_legend=1 00:03:11.989 --rc geninfo_all_blocks=1 00:03:11.989 --rc geninfo_unexecuted_blocks=1 00:03:11.989 00:03:11.989 ' 00:03:11.989 17:21:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.989 --rc genhtml_branch_coverage=1 00:03:11.989 --rc genhtml_function_coverage=1 00:03:11.989 --rc genhtml_legend=1 00:03:11.989 --rc geninfo_all_blocks=1 00:03:11.989 --rc geninfo_unexecuted_blocks=1 00:03:11.989 00:03:11.989 ' 00:03:11.989 17:21:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:11.989 --rc genhtml_branch_coverage=1 00:03:11.989 --rc genhtml_function_coverage=1 00:03:11.989 --rc genhtml_legend=1 00:03:11.989 --rc geninfo_all_blocks=1 00:03:11.989 --rc geninfo_unexecuted_blocks=1 00:03:11.989 00:03:11.989 ' 00:03:11.989 17:21:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.989 --rc genhtml_branch_coverage=1 00:03:11.989 --rc genhtml_function_coverage=1 00:03:11.989 --rc genhtml_legend=1 00:03:11.989 --rc geninfo_all_blocks=1 00:03:11.989 --rc geninfo_unexecuted_blocks=1 00:03:11.989 00:03:11.989 ' 00:03:11.989 17:21:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:11.989 17:21:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:11.989 17:21:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:11.989 17:21:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:11.989 ************************************ 00:03:11.989 START TEST env_memory 00:03:11.989 ************************************ 00:03:11.989 17:21:14 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:11.989 00:03:11.989 00:03:11.989 CUnit - A unit testing framework for C - Version 2.1-3 00:03:11.989 http://cunit.sourceforge.net/ 00:03:11.989 00:03:11.989 00:03:11.989 Suite: memory 00:03:11.989 Test: alloc and free memory map ...[2024-11-19 17:21:14.115488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:11.989 passed 00:03:11.989 Test: mem map translation ...[2024-11-19 17:21:14.134530] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:11.989 [2024-11-19 
17:21:14.134544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:11.989 [2024-11-19 17:21:14.134593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:11.989 [2024-11-19 17:21:14.134600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:11.989 passed 00:03:11.989 Test: mem map registration ...[2024-11-19 17:21:14.172610] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:11.989 [2024-11-19 17:21:14.172624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:11.989 passed 00:03:12.249 Test: mem map adjacent registrations ...passed 00:03:12.249 00:03:12.249 Run Summary: Type Total Ran Passed Failed Inactive 00:03:12.249 suites 1 1 n/a 0 0 00:03:12.249 tests 4 4 4 0 0 00:03:12.249 asserts 152 152 152 0 n/a 00:03:12.249 00:03:12.249 Elapsed time = 0.140 seconds 00:03:12.249 00:03:12.249 real 0m0.153s 00:03:12.249 user 0m0.144s 00:03:12.249 sys 0m0.009s 00:03:12.249 17:21:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:12.249 17:21:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:12.249 ************************************ 00:03:12.249 END TEST env_memory 00:03:12.249 ************************************ 00:03:12.249 17:21:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:12.249 17:21:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:12.249 17:21:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:12.249 17:21:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:12.249 ************************************ 00:03:12.249 START TEST env_vtophys 00:03:12.249 ************************************ 00:03:12.249 17:21:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:12.249 EAL: lib.eal log level changed from notice to debug 00:03:12.249 EAL: Detected lcore 0 as core 0 on socket 0 00:03:12.249 EAL: Detected lcore 1 as core 1 on socket 0 00:03:12.249 EAL: Detected lcore 2 as core 2 on socket 0 00:03:12.249 EAL: Detected lcore 3 as core 3 on socket 0 00:03:12.249 EAL: Detected lcore 4 as core 4 on socket 0 00:03:12.249 EAL: Detected lcore 5 as core 5 on socket 0 00:03:12.249 EAL: Detected lcore 6 as core 6 on socket 0 00:03:12.249 EAL: Detected lcore 7 as core 8 on socket 0 00:03:12.249 EAL: Detected lcore 8 as core 9 on socket 0 00:03:12.249 EAL: Detected lcore 9 as core 10 on socket 0 00:03:12.249 EAL: Detected lcore 10 as core 11 on socket 0 00:03:12.249 EAL: Detected lcore 11 as core 12 on socket 0 00:03:12.249 EAL: Detected lcore 12 as core 13 on socket 0 00:03:12.249 EAL: Detected lcore 13 as core 16 on socket 0 00:03:12.249 EAL: Detected lcore 14 as core 17 on socket 0 00:03:12.249 EAL: Detected lcore 15 as core 18 on socket 0 00:03:12.249 EAL: Detected lcore 16 as core 19 on socket 0 00:03:12.249 EAL: Detected lcore 17 as core 20 on socket 0 00:03:12.249 EAL: Detected lcore 18 as core 21 on socket 0 00:03:12.249 EAL: Detected lcore 19 as core 25 on socket 0 00:03:12.249 EAL: Detected lcore 20 as core 26 on socket 0 00:03:12.249 EAL: Detected lcore 21 as core 27 on socket 0 00:03:12.249 EAL: Detected lcore 22 as core 28 on socket 0 00:03:12.249 EAL: Detected lcore 23 as core 29 on socket 0 00:03:12.249 EAL: Detected lcore 24 as core 0 on socket 1 00:03:12.249 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:12.249 EAL: Detected lcore 26 as core 2 on socket 1 00:03:12.249 EAL: Detected lcore 27 as core 3 on socket 1 00:03:12.249 EAL: Detected lcore 28 as core 4 on socket 1 00:03:12.249 EAL: Detected lcore 29 as core 5 on socket 1 00:03:12.249 EAL: Detected lcore 30 as core 6 on socket 1 00:03:12.249 EAL: Detected lcore 31 as core 9 on socket 1 00:03:12.249 EAL: Detected lcore 32 as core 10 on socket 1 00:03:12.249 EAL: Detected lcore 33 as core 11 on socket 1 00:03:12.249 EAL: Detected lcore 34 as core 12 on socket 1 00:03:12.249 EAL: Detected lcore 35 as core 13 on socket 1 00:03:12.249 EAL: Detected lcore 36 as core 16 on socket 1 00:03:12.249 EAL: Detected lcore 37 as core 17 on socket 1 00:03:12.249 EAL: Detected lcore 38 as core 18 on socket 1 00:03:12.249 EAL: Detected lcore 39 as core 19 on socket 1 00:03:12.249 EAL: Detected lcore 40 as core 20 on socket 1 00:03:12.249 EAL: Detected lcore 41 as core 21 on socket 1 00:03:12.249 EAL: Detected lcore 42 as core 24 on socket 1 00:03:12.249 EAL: Detected lcore 43 as core 25 on socket 1 00:03:12.249 EAL: Detected lcore 44 as core 26 on socket 1 00:03:12.249 EAL: Detected lcore 45 as core 27 on socket 1 00:03:12.249 EAL: Detected lcore 46 as core 28 on socket 1 00:03:12.249 EAL: Detected lcore 47 as core 29 on socket 1 00:03:12.249 EAL: Detected lcore 48 as core 0 on socket 0 00:03:12.249 EAL: Detected lcore 49 as core 1 on socket 0 00:03:12.249 EAL: Detected lcore 50 as core 2 on socket 0 00:03:12.249 EAL: Detected lcore 51 as core 3 on socket 0 00:03:12.249 EAL: Detected lcore 52 as core 4 on socket 0 00:03:12.249 EAL: Detected lcore 53 as core 5 on socket 0 00:03:12.249 EAL: Detected lcore 54 as core 6 on socket 0 00:03:12.249 EAL: Detected lcore 55 as core 8 on socket 0 00:03:12.249 EAL: Detected lcore 56 as core 9 on socket 0 00:03:12.249 EAL: Detected lcore 57 as core 10 on socket 0 00:03:12.249 EAL: Detected lcore 58 as core 11 on socket 0 00:03:12.249 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:12.249 EAL: Detected lcore 60 as core 13 on socket 0 00:03:12.249 EAL: Detected lcore 61 as core 16 on socket 0 00:03:12.249 EAL: Detected lcore 62 as core 17 on socket 0 00:03:12.249 EAL: Detected lcore 63 as core 18 on socket 0 00:03:12.249 EAL: Detected lcore 64 as core 19 on socket 0 00:03:12.249 EAL: Detected lcore 65 as core 20 on socket 0 00:03:12.249 EAL: Detected lcore 66 as core 21 on socket 0 00:03:12.249 EAL: Detected lcore 67 as core 25 on socket 0 00:03:12.249 EAL: Detected lcore 68 as core 26 on socket 0 00:03:12.249 EAL: Detected lcore 69 as core 27 on socket 0 00:03:12.249 EAL: Detected lcore 70 as core 28 on socket 0 00:03:12.249 EAL: Detected lcore 71 as core 29 on socket 0 00:03:12.249 EAL: Detected lcore 72 as core 0 on socket 1 00:03:12.249 EAL: Detected lcore 73 as core 1 on socket 1 00:03:12.249 EAL: Detected lcore 74 as core 2 on socket 1 00:03:12.249 EAL: Detected lcore 75 as core 3 on socket 1 00:03:12.249 EAL: Detected lcore 76 as core 4 on socket 1 00:03:12.249 EAL: Detected lcore 77 as core 5 on socket 1 00:03:12.249 EAL: Detected lcore 78 as core 6 on socket 1 00:03:12.249 EAL: Detected lcore 79 as core 9 on socket 1 00:03:12.249 EAL: Detected lcore 80 as core 10 on socket 1 00:03:12.249 EAL: Detected lcore 81 as core 11 on socket 1 00:03:12.249 EAL: Detected lcore 82 as core 12 on socket 1 00:03:12.249 EAL: Detected lcore 83 as core 13 on socket 1 00:03:12.249 EAL: Detected lcore 84 as core 16 on socket 1 00:03:12.249 EAL: Detected lcore 85 as core 17 on socket 1 00:03:12.249 EAL: Detected lcore 86 as core 18 on socket 1 00:03:12.249 EAL: Detected lcore 87 as core 19 on socket 1 00:03:12.249 EAL: Detected lcore 88 as core 20 on socket 1 00:03:12.249 EAL: Detected lcore 89 as core 21 on socket 1 00:03:12.249 EAL: Detected lcore 90 as core 24 on socket 1 00:03:12.249 EAL: Detected lcore 91 as core 25 on socket 1 00:03:12.249 EAL: Detected lcore 92 as core 26 on socket 1 00:03:12.249 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:12.250 EAL: Detected lcore 94 as core 28 on socket 1 00:03:12.250 EAL: Detected lcore 95 as core 29 on socket 1 00:03:12.250 EAL: Maximum logical cores by configuration: 128 00:03:12.250 EAL: Detected CPU lcores: 96 00:03:12.250 EAL: Detected NUMA nodes: 2 00:03:12.250 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:12.250 EAL: Detected shared linkage of DPDK 00:03:12.250 EAL: No shared files mode enabled, IPC will be disabled 00:03:12.250 EAL: Bus pci wants IOVA as 'DC' 00:03:12.250 EAL: Buses did not request a specific IOVA mode. 00:03:12.250 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:12.250 EAL: Selected IOVA mode 'VA' 00:03:12.250 EAL: Probing VFIO support... 00:03:12.250 EAL: IOMMU type 1 (Type 1) is supported 00:03:12.250 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:12.250 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:12.250 EAL: VFIO support initialized 00:03:12.250 EAL: Ask a virtual area of 0x2e000 bytes 00:03:12.250 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:12.250 EAL: Setting up physically contiguous memory... 
00:03:12.250 EAL: Setting maximum number of open files to 524288 00:03:12.250 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:12.250 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:12.250 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:12.250 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:12.250 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.250 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:12.250 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.250 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.250 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:12.250 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:12.250 EAL: Hugepages will be freed exactly as allocated. 
00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: TSC frequency is ~2300000 KHz 00:03:12.250 EAL: Main lcore 0 is ready (tid=7f17e3f67a00;cpuset=[0]) 00:03:12.250 EAL: Trying to obtain current memory policy. 00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 0 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 2MB 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:12.250 EAL: Mem event callback 'spdk:(nil)' registered 00:03:12.250 00:03:12.250 00:03:12.250 CUnit - A unit testing framework for C - Version 2.1-3 00:03:12.250 http://cunit.sourceforge.net/ 00:03:12.250 00:03:12.250 00:03:12.250 Suite: components_suite 00:03:12.250 Test: vtophys_malloc_test ...passed 00:03:12.250 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 4 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 4MB 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was shrunk by 4MB 00:03:12.250 EAL: Trying to obtain current memory policy. 
00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 4 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 6MB 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was shrunk by 6MB 00:03:12.250 EAL: Trying to obtain current memory policy. 00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 4 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 10MB 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was shrunk by 10MB 00:03:12.250 EAL: Trying to obtain current memory policy. 00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 4 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 18MB 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was shrunk by 18MB 00:03:12.250 EAL: Trying to obtain current memory policy. 
00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 4 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 34MB 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was shrunk by 34MB 00:03:12.250 EAL: Trying to obtain current memory policy. 00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 4 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 66MB 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was shrunk by 66MB 00:03:12.250 EAL: Trying to obtain current memory policy. 00:03:12.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.250 EAL: Restoring previous memory policy: 4 00:03:12.250 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.250 EAL: request: mp_malloc_sync 00:03:12.250 EAL: No shared files mode enabled, IPC is disabled 00:03:12.250 EAL: Heap on socket 0 was expanded by 130MB 00:03:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.509 EAL: request: mp_malloc_sync 00:03:12.509 EAL: No shared files mode enabled, IPC is disabled 00:03:12.509 EAL: Heap on socket 0 was shrunk by 130MB 00:03:12.509 EAL: Trying to obtain current memory policy. 
00:03:12.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.509 EAL: Restoring previous memory policy: 4 00:03:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.509 EAL: request: mp_malloc_sync 00:03:12.509 EAL: No shared files mode enabled, IPC is disabled 00:03:12.509 EAL: Heap on socket 0 was expanded by 258MB 00:03:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.509 EAL: request: mp_malloc_sync 00:03:12.509 EAL: No shared files mode enabled, IPC is disabled 00:03:12.509 EAL: Heap on socket 0 was shrunk by 258MB 00:03:12.509 EAL: Trying to obtain current memory policy. 00:03:12.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.509 EAL: Restoring previous memory policy: 4 00:03:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.509 EAL: request: mp_malloc_sync 00:03:12.509 EAL: No shared files mode enabled, IPC is disabled 00:03:12.509 EAL: Heap on socket 0 was expanded by 514MB 00:03:12.768 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.768 EAL: request: mp_malloc_sync 00:03:12.768 EAL: No shared files mode enabled, IPC is disabled 00:03:12.768 EAL: Heap on socket 0 was shrunk by 514MB 00:03:12.768 EAL: Trying to obtain current memory policy. 
00:03:12.768 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.027 EAL: Restoring previous memory policy: 4 00:03:13.027 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.027 EAL: request: mp_malloc_sync 00:03:13.027 EAL: No shared files mode enabled, IPC is disabled 00:03:13.027 EAL: Heap on socket 0 was expanded by 1026MB 00:03:13.027 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.286 EAL: request: mp_malloc_sync 00:03:13.286 EAL: No shared files mode enabled, IPC is disabled 00:03:13.286 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:13.286 passed 00:03:13.286 00:03:13.286 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.286 suites 1 1 n/a 0 0 00:03:13.286 tests 2 2 2 0 0 00:03:13.286 asserts 497 497 497 0 n/a 00:03:13.286 00:03:13.286 Elapsed time = 0.975 seconds 00:03:13.286 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.286 EAL: request: mp_malloc_sync 00:03:13.287 EAL: No shared files mode enabled, IPC is disabled 00:03:13.287 EAL: Heap on socket 0 was shrunk by 2MB 00:03:13.287 EAL: No shared files mode enabled, IPC is disabled 00:03:13.287 EAL: No shared files mode enabled, IPC is disabled 00:03:13.287 EAL: No shared files mode enabled, IPC is disabled 00:03:13.287 00:03:13.287 real 0m1.102s 00:03:13.287 user 0m0.649s 00:03:13.287 sys 0m0.428s 00:03:13.287 17:21:15 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:13.287 17:21:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:13.287 ************************************ 00:03:13.287 END TEST env_vtophys 00:03:13.287 ************************************ 00:03:13.287 17:21:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:13.287 17:21:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:13.287 17:21:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:13.287 17:21:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:13.287 
************************************ 00:03:13.287 START TEST env_pci 00:03:13.287 ************************************ 00:03:13.287 17:21:15 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:13.287 00:03:13.287 00:03:13.287 CUnit - A unit testing framework for C - Version 2.1-3 00:03:13.287 http://cunit.sourceforge.net/ 00:03:13.287 00:03:13.287 00:03:13.287 Suite: pci 00:03:13.287 Test: pci_hook ...[2024-11-19 17:21:15.477405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3258141 has claimed it 00:03:13.287 EAL: Cannot find device (10000:00:01.0) 00:03:13.287 EAL: Failed to attach device on primary process 00:03:13.287 passed 00:03:13.287 00:03:13.287 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.287 suites 1 1 n/a 0 0 00:03:13.287 tests 1 1 1 0 0 00:03:13.287 asserts 25 25 25 0 n/a 00:03:13.287 00:03:13.287 Elapsed time = 0.026 seconds 00:03:13.287 00:03:13.287 real 0m0.046s 00:03:13.287 user 0m0.012s 00:03:13.287 sys 0m0.033s 00:03:13.287 17:21:15 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:13.287 17:21:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:13.287 ************************************ 00:03:13.287 END TEST env_pci 00:03:13.287 ************************************ 00:03:13.546 17:21:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:13.546 17:21:15 env -- env/env.sh@15 -- # uname 00:03:13.546 17:21:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:13.546 17:21:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:13.546 17:21:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:13.546 17:21:15 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:13.546 17:21:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:13.546 17:21:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:13.546 ************************************ 00:03:13.546 START TEST env_dpdk_post_init 00:03:13.546 ************************************ 00:03:13.546 17:21:15 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:13.546 EAL: Detected CPU lcores: 96 00:03:13.546 EAL: Detected NUMA nodes: 2 00:03:13.546 EAL: Detected shared linkage of DPDK 00:03:13.546 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:13.546 EAL: Selected IOVA mode 'VA' 00:03:13.546 EAL: VFIO support initialized 00:03:13.546 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:13.546 EAL: Using IOMMU type 1 (Type 1) 00:03:13.546 EAL: Ignore mapping IO port bar(1) 00:03:13.546 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:13.546 EAL: Ignore mapping IO port bar(1) 00:03:13.546 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:13.546 EAL: Ignore mapping IO port bar(1) 00:03:13.546 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:13.546 EAL: Ignore mapping IO port bar(1) 00:03:13.546 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:13.806 EAL: Ignore mapping IO port bar(1) 00:03:13.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:13.806 EAL: Ignore mapping IO port bar(1) 00:03:13.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:13.806 EAL: Ignore mapping IO port bar(1) 00:03:13.806 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:13.806 EAL: Ignore mapping IO port bar(1) 00:03:13.806 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:14.374 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:14.374 EAL: Ignore mapping IO port bar(1) 00:03:14.374 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:14.374 EAL: Ignore mapping IO port bar(1) 00:03:14.374 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:14.374 EAL: Ignore mapping IO port bar(1) 00:03:14.374 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:14.634 EAL: Ignore mapping IO port bar(1) 00:03:14.634 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:14.634 EAL: Ignore mapping IO port bar(1) 00:03:14.634 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:14.634 EAL: Ignore mapping IO port bar(1) 00:03:14.634 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:14.634 EAL: Ignore mapping IO port bar(1) 00:03:14.634 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:14.634 EAL: Ignore mapping IO port bar(1) 00:03:14.634 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:17.928 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:17.928 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:17.928 Starting DPDK initialization... 00:03:17.928 Starting SPDK post initialization... 00:03:17.928 SPDK NVMe probe 00:03:17.928 Attaching to 0000:5e:00.0 00:03:17.928 Attached to 0000:5e:00.0 00:03:17.928 Cleaning up... 
00:03:17.928 00:03:17.928 real 0m4.385s 00:03:17.928 user 0m3.006s 00:03:17.928 sys 0m0.453s 00:03:17.928 17:21:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.928 17:21:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:17.928 ************************************ 00:03:17.928 END TEST env_dpdk_post_init 00:03:17.928 ************************************ 00:03:17.928 17:21:20 env -- env/env.sh@26 -- # uname 00:03:17.928 17:21:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:17.928 17:21:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:17.928 17:21:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.928 17:21:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.928 17:21:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:17.928 ************************************ 00:03:17.928 START TEST env_mem_callbacks 00:03:17.928 ************************************ 00:03:17.928 17:21:20 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:17.928 EAL: Detected CPU lcores: 96 00:03:17.928 EAL: Detected NUMA nodes: 2 00:03:17.928 EAL: Detected shared linkage of DPDK 00:03:17.928 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:17.928 EAL: Selected IOVA mode 'VA' 00:03:17.928 EAL: VFIO support initialized 00:03:17.928 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:17.928 00:03:17.928 00:03:17.928 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.928 http://cunit.sourceforge.net/ 00:03:17.928 00:03:17.928 00:03:17.928 Suite: memory 00:03:17.928 Test: test ... 
00:03:17.928 register 0x200000200000 2097152 00:03:17.928 malloc 3145728 00:03:17.928 register 0x200000400000 4194304 00:03:17.928 buf 0x200000500000 len 3145728 PASSED 00:03:17.928 malloc 64 00:03:17.928 buf 0x2000004fff40 len 64 PASSED 00:03:17.928 malloc 4194304 00:03:17.928 register 0x200000800000 6291456 00:03:17.928 buf 0x200000a00000 len 4194304 PASSED 00:03:17.928 free 0x200000500000 3145728 00:03:17.928 free 0x2000004fff40 64 00:03:17.928 unregister 0x200000400000 4194304 PASSED 00:03:17.928 free 0x200000a00000 4194304 00:03:17.928 unregister 0x200000800000 6291456 PASSED 00:03:17.928 malloc 8388608 00:03:17.928 register 0x200000400000 10485760 00:03:17.928 buf 0x200000600000 len 8388608 PASSED 00:03:17.928 free 0x200000600000 8388608 00:03:17.928 unregister 0x200000400000 10485760 PASSED 00:03:17.928 passed 00:03:17.928 00:03:17.928 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.928 suites 1 1 n/a 0 0 00:03:17.928 tests 1 1 1 0 0 00:03:17.928 asserts 15 15 15 0 n/a 00:03:17.928 00:03:17.928 Elapsed time = 0.008 seconds 00:03:17.928 00:03:17.928 real 0m0.062s 00:03:17.928 user 0m0.015s 00:03:17.928 sys 0m0.047s 00:03:17.928 17:21:20 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.928 17:21:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:17.928 ************************************ 00:03:17.928 END TEST env_mem_callbacks 00:03:17.928 ************************************ 00:03:17.928 00:03:17.928 real 0m6.286s 00:03:17.928 user 0m4.061s 00:03:17.928 sys 0m1.308s 00:03:17.928 17:21:20 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.928 17:21:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:17.928 ************************************ 00:03:17.928 END TEST env 00:03:17.928 ************************************ 00:03:18.188 17:21:20 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:18.188 17:21:20 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.188 17:21:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.188 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:03:18.188 ************************************ 00:03:18.188 START TEST rpc 00:03:18.188 ************************************ 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:18.188 * Looking for test storage... 00:03:18.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:18.188 17:21:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:18.188 17:21:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.188 17:21:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:18.188 17:21:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:18.188 17:21:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:18.188 17:21:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:18.188 17:21:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:18.188 17:21:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:18.188 17:21:20 rpc -- scripts/common.sh@345 -- # : 1 00:03:18.188 17:21:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:18.188 17:21:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:18.188 17:21:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:18.188 17:21:20 rpc -- scripts/common.sh@353 -- # local d=1 00:03:18.188 17:21:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.188 17:21:20 rpc -- scripts/common.sh@355 -- # echo 1 00:03:18.188 17:21:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:18.188 17:21:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@353 -- # local d=2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.188 17:21:20 rpc -- scripts/common.sh@355 -- # echo 2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:18.188 17:21:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:18.188 17:21:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:18.188 17:21:20 rpc -- scripts/common.sh@368 -- # return 0 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.188 --rc genhtml_branch_coverage=1 00:03:18.188 --rc genhtml_function_coverage=1 00:03:18.188 --rc genhtml_legend=1 00:03:18.188 --rc geninfo_all_blocks=1 00:03:18.188 --rc geninfo_unexecuted_blocks=1 00:03:18.188 00:03:18.188 ' 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.188 --rc genhtml_branch_coverage=1 00:03:18.188 --rc genhtml_function_coverage=1 00:03:18.188 --rc genhtml_legend=1 00:03:18.188 --rc geninfo_all_blocks=1 00:03:18.188 --rc geninfo_unexecuted_blocks=1 00:03:18.188 00:03:18.188 ' 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:18.188 --rc genhtml_branch_coverage=1 00:03:18.188 --rc genhtml_function_coverage=1 00:03:18.188 --rc genhtml_legend=1 00:03:18.188 --rc geninfo_all_blocks=1 00:03:18.188 --rc geninfo_unexecuted_blocks=1 00:03:18.188 00:03:18.188 ' 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.188 --rc genhtml_branch_coverage=1 00:03:18.188 --rc genhtml_function_coverage=1 00:03:18.188 --rc genhtml_legend=1 00:03:18.188 --rc geninfo_all_blocks=1 00:03:18.188 --rc geninfo_unexecuted_blocks=1 00:03:18.188 00:03:18.188 ' 00:03:18.188 17:21:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3259041 00:03:18.188 17:21:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:18.188 17:21:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:18.188 17:21:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3259041 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 3259041 ']' 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:18.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:18.188 17:21:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.448 [2024-11-19 17:21:20.454137] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:03:18.448 [2024-11-19 17:21:20.454186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259041 ] 00:03:18.448 [2024-11-19 17:21:20.527994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:18.448 [2024-11-19 17:21:20.570088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:18.448 [2024-11-19 17:21:20.570127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3259041' to capture a snapshot of events at runtime. 00:03:18.448 [2024-11-19 17:21:20.570136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:18.448 [2024-11-19 17:21:20.570142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:18.448 [2024-11-19 17:21:20.570147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3259041 for offline analysis/debug. 
00:03:18.448 [2024-11-19 17:21:20.570722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:18.708 17:21:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:18.708 17:21:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:18.708 17:21:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:18.708 17:21:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:18.708 17:21:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:18.708 17:21:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:18.708 17:21:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.708 17:21:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.708 17:21:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.708 ************************************ 00:03:18.708 START TEST rpc_integrity 00:03:18.708 ************************************ 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.708 17:21:20 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.708 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:18.708 { 00:03:18.708 "name": "Malloc0", 00:03:18.708 "aliases": [ 00:03:18.708 "d8f35e3a-08c6-47a3-9e31-3b667e8a8109" 00:03:18.708 ], 00:03:18.708 "product_name": "Malloc disk", 00:03:18.708 "block_size": 512, 00:03:18.708 "num_blocks": 16384, 00:03:18.708 "uuid": "d8f35e3a-08c6-47a3-9e31-3b667e8a8109", 00:03:18.708 "assigned_rate_limits": { 00:03:18.708 "rw_ios_per_sec": 0, 00:03:18.708 "rw_mbytes_per_sec": 0, 00:03:18.708 "r_mbytes_per_sec": 0, 00:03:18.708 "w_mbytes_per_sec": 0 00:03:18.708 }, 00:03:18.708 "claimed": false, 00:03:18.708 "zoned": false, 00:03:18.708 "supported_io_types": { 00:03:18.708 "read": true, 00:03:18.708 "write": true, 00:03:18.708 "unmap": true, 00:03:18.708 "flush": true, 00:03:18.708 "reset": true, 00:03:18.708 "nvme_admin": false, 00:03:18.708 "nvme_io": false, 00:03:18.708 "nvme_io_md": false, 00:03:18.708 "write_zeroes": true, 00:03:18.708 "zcopy": true, 00:03:18.708 "get_zone_info": false, 00:03:18.708 
"zone_management": false, 00:03:18.708 "zone_append": false, 00:03:18.708 "compare": false, 00:03:18.708 "compare_and_write": false, 00:03:18.708 "abort": true, 00:03:18.708 "seek_hole": false, 00:03:18.708 "seek_data": false, 00:03:18.708 "copy": true, 00:03:18.708 "nvme_iov_md": false 00:03:18.708 }, 00:03:18.708 "memory_domains": [ 00:03:18.708 { 00:03:18.708 "dma_device_id": "system", 00:03:18.708 "dma_device_type": 1 00:03:18.708 }, 00:03:18.708 { 00:03:18.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:18.708 "dma_device_type": 2 00:03:18.708 } 00:03:18.708 ], 00:03:18.708 "driver_specific": {} 00:03:18.708 } 00:03:18.708 ]' 00:03:18.708 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:18.968 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:18.968 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:18.968 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.968 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.968 [2024-11-19 17:21:20.963936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:18.968 [2024-11-19 17:21:20.963973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:18.968 [2024-11-19 17:21:20.963986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe056e0 00:03:18.968 [2024-11-19 17:21:20.963992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:18.968 [2024-11-19 17:21:20.965116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:18.968 [2024-11-19 17:21:20.965137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:18.968 Passthru0 00:03:18.968 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.968 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:18.968 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.968 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.968 17:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.968 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:18.968 { 00:03:18.968 "name": "Malloc0", 00:03:18.968 "aliases": [ 00:03:18.968 "d8f35e3a-08c6-47a3-9e31-3b667e8a8109" 00:03:18.968 ], 00:03:18.968 "product_name": "Malloc disk", 00:03:18.968 "block_size": 512, 00:03:18.968 "num_blocks": 16384, 00:03:18.968 "uuid": "d8f35e3a-08c6-47a3-9e31-3b667e8a8109", 00:03:18.968 "assigned_rate_limits": { 00:03:18.968 "rw_ios_per_sec": 0, 00:03:18.968 "rw_mbytes_per_sec": 0, 00:03:18.968 "r_mbytes_per_sec": 0, 00:03:18.968 "w_mbytes_per_sec": 0 00:03:18.968 }, 00:03:18.968 "claimed": true, 00:03:18.968 "claim_type": "exclusive_write", 00:03:18.968 "zoned": false, 00:03:18.968 "supported_io_types": { 00:03:18.968 "read": true, 00:03:18.968 "write": true, 00:03:18.968 "unmap": true, 00:03:18.968 "flush": true, 00:03:18.968 "reset": true, 00:03:18.968 "nvme_admin": false, 00:03:18.968 "nvme_io": false, 00:03:18.968 "nvme_io_md": false, 00:03:18.968 "write_zeroes": true, 00:03:18.968 "zcopy": true, 00:03:18.968 "get_zone_info": false, 00:03:18.968 "zone_management": false, 00:03:18.968 "zone_append": false, 00:03:18.968 "compare": false, 00:03:18.968 "compare_and_write": false, 00:03:18.968 "abort": true, 00:03:18.968 "seek_hole": false, 00:03:18.968 "seek_data": false, 00:03:18.968 "copy": true, 00:03:18.968 "nvme_iov_md": false 00:03:18.968 }, 00:03:18.968 "memory_domains": [ 00:03:18.968 { 00:03:18.968 "dma_device_id": "system", 00:03:18.968 "dma_device_type": 1 00:03:18.968 }, 00:03:18.968 { 00:03:18.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:18.968 "dma_device_type": 2 00:03:18.968 } 00:03:18.968 ], 00:03:18.968 "driver_specific": {} 00:03:18.968 }, 00:03:18.968 { 
00:03:18.968 "name": "Passthru0", 00:03:18.968 "aliases": [ 00:03:18.968 "820bc88c-3872-5fae-90fb-71fb0d3526cf" 00:03:18.968 ], 00:03:18.968 "product_name": "passthru", 00:03:18.968 "block_size": 512, 00:03:18.968 "num_blocks": 16384, 00:03:18.968 "uuid": "820bc88c-3872-5fae-90fb-71fb0d3526cf", 00:03:18.968 "assigned_rate_limits": { 00:03:18.968 "rw_ios_per_sec": 0, 00:03:18.968 "rw_mbytes_per_sec": 0, 00:03:18.968 "r_mbytes_per_sec": 0, 00:03:18.968 "w_mbytes_per_sec": 0 00:03:18.968 }, 00:03:18.968 "claimed": false, 00:03:18.968 "zoned": false, 00:03:18.968 "supported_io_types": { 00:03:18.968 "read": true, 00:03:18.968 "write": true, 00:03:18.968 "unmap": true, 00:03:18.968 "flush": true, 00:03:18.968 "reset": true, 00:03:18.968 "nvme_admin": false, 00:03:18.968 "nvme_io": false, 00:03:18.968 "nvme_io_md": false, 00:03:18.968 "write_zeroes": true, 00:03:18.968 "zcopy": true, 00:03:18.968 "get_zone_info": false, 00:03:18.968 "zone_management": false, 00:03:18.968 "zone_append": false, 00:03:18.968 "compare": false, 00:03:18.968 "compare_and_write": false, 00:03:18.968 "abort": true, 00:03:18.968 "seek_hole": false, 00:03:18.968 "seek_data": false, 00:03:18.968 "copy": true, 00:03:18.968 "nvme_iov_md": false 00:03:18.968 }, 00:03:18.968 "memory_domains": [ 00:03:18.968 { 00:03:18.968 "dma_device_id": "system", 00:03:18.968 "dma_device_type": 1 00:03:18.968 }, 00:03:18.968 { 00:03:18.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:18.968 "dma_device_type": 2 00:03:18.968 } 00:03:18.968 ], 00:03:18.968 "driver_specific": { 00:03:18.968 "passthru": { 00:03:18.968 "name": "Passthru0", 00:03:18.968 "base_bdev_name": "Malloc0" 00:03:18.968 } 00:03:18.968 } 00:03:18.968 } 00:03:18.968 ]' 00:03:18.968 17:21:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:18.968 17:21:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:18.969 17:21:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:18.969 17:21:21 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.969 17:21:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.969 17:21:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.969 17:21:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:18.969 17:21:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:18.969 17:21:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:18.969 00:03:18.969 real 0m0.277s 00:03:18.969 user 0m0.168s 00:03:18.969 sys 0m0.044s 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:18.969 17:21:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.969 ************************************ 00:03:18.969 END TEST rpc_integrity 00:03:18.969 ************************************ 00:03:18.969 17:21:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:18.969 17:21:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.969 17:21:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.969 17:21:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.969 ************************************ 00:03:18.969 START TEST rpc_plugins 
00:03:18.969 ************************************ 00:03:18.969 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:18.969 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:18.969 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.969 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:19.228 { 00:03:19.228 "name": "Malloc1", 00:03:19.228 "aliases": [ 00:03:19.228 "9e5b9e81-e1b8-4560-82d8-cffdbf56fa81" 00:03:19.228 ], 00:03:19.228 "product_name": "Malloc disk", 00:03:19.228 "block_size": 4096, 00:03:19.228 "num_blocks": 256, 00:03:19.228 "uuid": "9e5b9e81-e1b8-4560-82d8-cffdbf56fa81", 00:03:19.228 "assigned_rate_limits": { 00:03:19.228 "rw_ios_per_sec": 0, 00:03:19.228 "rw_mbytes_per_sec": 0, 00:03:19.228 "r_mbytes_per_sec": 0, 00:03:19.228 "w_mbytes_per_sec": 0 00:03:19.228 }, 00:03:19.228 "claimed": false, 00:03:19.228 "zoned": false, 00:03:19.228 "supported_io_types": { 00:03:19.228 "read": true, 00:03:19.228 "write": true, 00:03:19.228 "unmap": true, 00:03:19.228 "flush": true, 00:03:19.228 "reset": true, 00:03:19.228 "nvme_admin": false, 00:03:19.228 "nvme_io": false, 00:03:19.228 "nvme_io_md": false, 00:03:19.228 "write_zeroes": true, 00:03:19.228 "zcopy": true, 00:03:19.228 "get_zone_info": false, 00:03:19.228 "zone_management": false, 00:03:19.228 
"zone_append": false, 00:03:19.228 "compare": false, 00:03:19.228 "compare_and_write": false, 00:03:19.228 "abort": true, 00:03:19.228 "seek_hole": false, 00:03:19.228 "seek_data": false, 00:03:19.228 "copy": true, 00:03:19.228 "nvme_iov_md": false 00:03:19.228 }, 00:03:19.228 "memory_domains": [ 00:03:19.228 { 00:03:19.228 "dma_device_id": "system", 00:03:19.228 "dma_device_type": 1 00:03:19.228 }, 00:03:19.228 { 00:03:19.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.228 "dma_device_type": 2 00:03:19.228 } 00:03:19.228 ], 00:03:19.228 "driver_specific": {} 00:03:19.228 } 00:03:19.228 ]' 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:19.228 17:21:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:19.228 00:03:19.228 real 0m0.140s 00:03:19.228 user 0m0.083s 00:03:19.228 sys 0m0.023s 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:19.228 17:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:19.228 ************************************ 
00:03:19.228 END TEST rpc_plugins 00:03:19.228 ************************************ 00:03:19.228 17:21:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:19.228 17:21:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:19.228 17:21:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:19.228 17:21:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:19.228 ************************************ 00:03:19.228 START TEST rpc_trace_cmd_test 00:03:19.228 ************************************ 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:19.228 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3259041", 00:03:19.228 "tpoint_group_mask": "0x8", 00:03:19.228 "iscsi_conn": { 00:03:19.228 "mask": "0x2", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "scsi": { 00:03:19.228 "mask": "0x4", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "bdev": { 00:03:19.228 "mask": "0x8", 00:03:19.228 "tpoint_mask": "0xffffffffffffffff" 00:03:19.228 }, 00:03:19.228 "nvmf_rdma": { 00:03:19.228 "mask": "0x10", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "nvmf_tcp": { 00:03:19.228 "mask": "0x20", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "ftl": { 00:03:19.228 "mask": "0x40", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "blobfs": { 00:03:19.228 "mask": "0x80", 00:03:19.228 
"tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "dsa": { 00:03:19.228 "mask": "0x200", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "thread": { 00:03:19.228 "mask": "0x400", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "nvme_pcie": { 00:03:19.228 "mask": "0x800", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "iaa": { 00:03:19.228 "mask": "0x1000", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "nvme_tcp": { 00:03:19.228 "mask": "0x2000", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "bdev_nvme": { 00:03:19.228 "mask": "0x4000", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "sock": { 00:03:19.228 "mask": "0x8000", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "blob": { 00:03:19.228 "mask": "0x10000", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "bdev_raid": { 00:03:19.228 "mask": "0x20000", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 }, 00:03:19.228 "scheduler": { 00:03:19.228 "mask": "0x40000", 00:03:19.228 "tpoint_mask": "0x0" 00:03:19.228 } 00:03:19.228 }' 00:03:19.228 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:19.488 00:03:19.488 real 0m0.196s 00:03:19.488 user 0m0.163s 00:03:19.488 sys 0m0.026s 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:19.488 17:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:19.488 ************************************ 00:03:19.488 END TEST rpc_trace_cmd_test 00:03:19.488 ************************************ 00:03:19.488 17:21:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:19.488 17:21:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:19.488 17:21:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:19.488 17:21:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:19.488 17:21:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:19.488 17:21:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:19.488 ************************************ 00:03:19.488 START TEST rpc_daemon_integrity 00:03:19.488 ************************************ 00:03:19.488 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:19.488 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:19.488 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.488 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.488 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.488 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:19.488 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:19.748 { 00:03:19.748 "name": "Malloc2", 00:03:19.748 "aliases": [ 00:03:19.748 "571dcaad-7f96-41e1-a16a-2b10f161b18d" 00:03:19.748 ], 00:03:19.748 "product_name": "Malloc disk", 00:03:19.748 "block_size": 512, 00:03:19.748 "num_blocks": 16384, 00:03:19.748 "uuid": "571dcaad-7f96-41e1-a16a-2b10f161b18d", 00:03:19.748 "assigned_rate_limits": { 00:03:19.748 "rw_ios_per_sec": 0, 00:03:19.748 "rw_mbytes_per_sec": 0, 00:03:19.748 "r_mbytes_per_sec": 0, 00:03:19.748 "w_mbytes_per_sec": 0 00:03:19.748 }, 00:03:19.748 "claimed": false, 00:03:19.748 "zoned": false, 00:03:19.748 "supported_io_types": { 00:03:19.748 "read": true, 00:03:19.748 "write": true, 00:03:19.748 "unmap": true, 00:03:19.748 "flush": true, 00:03:19.748 "reset": true, 00:03:19.748 "nvme_admin": false, 00:03:19.748 "nvme_io": false, 00:03:19.748 "nvme_io_md": false, 00:03:19.748 "write_zeroes": true, 00:03:19.748 "zcopy": true, 00:03:19.748 "get_zone_info": false, 00:03:19.748 "zone_management": false, 00:03:19.748 "zone_append": false, 00:03:19.748 "compare": false, 00:03:19.748 "compare_and_write": false, 00:03:19.748 "abort": true, 00:03:19.748 "seek_hole": false, 00:03:19.748 "seek_data": false, 00:03:19.748 "copy": true, 00:03:19.748 "nvme_iov_md": false 00:03:19.748 }, 00:03:19.748 "memory_domains": [ 00:03:19.748 { 
00:03:19.748 "dma_device_id": "system", 00:03:19.748 "dma_device_type": 1 00:03:19.748 }, 00:03:19.748 { 00:03:19.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.748 "dma_device_type": 2 00:03:19.748 } 00:03:19.748 ], 00:03:19.748 "driver_specific": {} 00:03:19.748 } 00:03:19.748 ]' 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.748 [2024-11-19 17:21:21.786229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:19.748 [2024-11-19 17:21:21.786256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:19.748 [2024-11-19 17:21:21.786267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe95b70 00:03:19.748 [2024-11-19 17:21:21.786274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:19.748 [2024-11-19 17:21:21.787266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:19.748 [2024-11-19 17:21:21.787287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:19.748 Passthru0 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:19.748 { 00:03:19.748 "name": "Malloc2", 00:03:19.748 "aliases": [ 00:03:19.748 "571dcaad-7f96-41e1-a16a-2b10f161b18d" 00:03:19.748 ], 00:03:19.748 "product_name": "Malloc disk", 00:03:19.748 "block_size": 512, 00:03:19.748 "num_blocks": 16384, 00:03:19.748 "uuid": "571dcaad-7f96-41e1-a16a-2b10f161b18d", 00:03:19.748 "assigned_rate_limits": { 00:03:19.748 "rw_ios_per_sec": 0, 00:03:19.748 "rw_mbytes_per_sec": 0, 00:03:19.748 "r_mbytes_per_sec": 0, 00:03:19.748 "w_mbytes_per_sec": 0 00:03:19.748 }, 00:03:19.748 "claimed": true, 00:03:19.748 "claim_type": "exclusive_write", 00:03:19.748 "zoned": false, 00:03:19.748 "supported_io_types": { 00:03:19.748 "read": true, 00:03:19.748 "write": true, 00:03:19.748 "unmap": true, 00:03:19.748 "flush": true, 00:03:19.748 "reset": true, 00:03:19.748 "nvme_admin": false, 00:03:19.748 "nvme_io": false, 00:03:19.748 "nvme_io_md": false, 00:03:19.748 "write_zeroes": true, 00:03:19.748 "zcopy": true, 00:03:19.748 "get_zone_info": false, 00:03:19.748 "zone_management": false, 00:03:19.748 "zone_append": false, 00:03:19.748 "compare": false, 00:03:19.748 "compare_and_write": false, 00:03:19.748 "abort": true, 00:03:19.748 "seek_hole": false, 00:03:19.748 "seek_data": false, 00:03:19.748 "copy": true, 00:03:19.748 "nvme_iov_md": false 00:03:19.748 }, 00:03:19.748 "memory_domains": [ 00:03:19.748 { 00:03:19.748 "dma_device_id": "system", 00:03:19.748 "dma_device_type": 1 00:03:19.748 }, 00:03:19.748 { 00:03:19.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.748 "dma_device_type": 2 00:03:19.748 } 00:03:19.748 ], 00:03:19.748 "driver_specific": {} 00:03:19.748 }, 00:03:19.748 { 00:03:19.748 "name": "Passthru0", 00:03:19.748 "aliases": [ 00:03:19.748 "72aefe44-4103-53a7-8aaa-f550be5a6c41" 00:03:19.748 ], 00:03:19.748 "product_name": "passthru", 00:03:19.748 "block_size": 512, 00:03:19.748 "num_blocks": 16384, 00:03:19.748 "uuid": 
"72aefe44-4103-53a7-8aaa-f550be5a6c41", 00:03:19.748 "assigned_rate_limits": { 00:03:19.748 "rw_ios_per_sec": 0, 00:03:19.748 "rw_mbytes_per_sec": 0, 00:03:19.748 "r_mbytes_per_sec": 0, 00:03:19.748 "w_mbytes_per_sec": 0 00:03:19.748 }, 00:03:19.748 "claimed": false, 00:03:19.748 "zoned": false, 00:03:19.748 "supported_io_types": { 00:03:19.748 "read": true, 00:03:19.748 "write": true, 00:03:19.748 "unmap": true, 00:03:19.748 "flush": true, 00:03:19.748 "reset": true, 00:03:19.748 "nvme_admin": false, 00:03:19.748 "nvme_io": false, 00:03:19.748 "nvme_io_md": false, 00:03:19.748 "write_zeroes": true, 00:03:19.748 "zcopy": true, 00:03:19.748 "get_zone_info": false, 00:03:19.748 "zone_management": false, 00:03:19.748 "zone_append": false, 00:03:19.748 "compare": false, 00:03:19.748 "compare_and_write": false, 00:03:19.748 "abort": true, 00:03:19.748 "seek_hole": false, 00:03:19.748 "seek_data": false, 00:03:19.748 "copy": true, 00:03:19.748 "nvme_iov_md": false 00:03:19.748 }, 00:03:19.748 "memory_domains": [ 00:03:19.748 { 00:03:19.748 "dma_device_id": "system", 00:03:19.748 "dma_device_type": 1 00:03:19.748 }, 00:03:19.748 { 00:03:19.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:19.748 "dma_device_type": 2 00:03:19.748 } 00:03:19.748 ], 00:03:19.748 "driver_specific": { 00:03:19.748 "passthru": { 00:03:19.748 "name": "Passthru0", 00:03:19.748 "base_bdev_name": "Malloc2" 00:03:19.748 } 00:03:19.748 } 00:03:19.748 } 00:03:19.748 ]' 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:19.748 00:03:19.748 real 0m0.271s 00:03:19.748 user 0m0.179s 00:03:19.748 sys 0m0.035s 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:19.748 17:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:19.749 ************************************ 00:03:19.749 END TEST rpc_daemon_integrity 00:03:19.749 ************************************ 00:03:19.749 17:21:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:19.749 17:21:21 rpc -- rpc/rpc.sh@84 -- # killprocess 3259041 00:03:19.749 17:21:21 rpc -- common/autotest_common.sh@954 -- # '[' -z 3259041 ']' 00:03:19.749 17:21:21 rpc -- common/autotest_common.sh@958 -- # kill -0 3259041 00:03:20.008 17:21:21 rpc -- common/autotest_common.sh@959 -- # uname 00:03:20.008 17:21:21 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:20.008 17:21:21 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259041 00:03:20.008 17:21:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:20.008 17:21:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:20.008 17:21:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259041' 00:03:20.008 killing process with pid 3259041 00:03:20.008 17:21:22 rpc -- common/autotest_common.sh@973 -- # kill 3259041 00:03:20.008 17:21:22 rpc -- common/autotest_common.sh@978 -- # wait 3259041 00:03:20.267 00:03:20.267 real 0m2.103s 00:03:20.267 user 0m2.623s 00:03:20.267 sys 0m0.743s 00:03:20.267 17:21:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:20.267 17:21:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:20.267 ************************************ 00:03:20.267 END TEST rpc 00:03:20.267 ************************************ 00:03:20.268 17:21:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:20.268 17:21:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.268 17:21:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:20.268 17:21:22 -- common/autotest_common.sh@10 -- # set +x 00:03:20.268 ************************************ 00:03:20.268 START TEST skip_rpc 00:03:20.268 ************************************ 00:03:20.268 17:21:22 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:20.268 * Looking for test storage... 
00:03:20.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.527 17:21:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.527 --rc genhtml_branch_coverage=1 00:03:20.527 --rc genhtml_function_coverage=1 00:03:20.527 --rc genhtml_legend=1 00:03:20.527 --rc geninfo_all_blocks=1 00:03:20.527 --rc geninfo_unexecuted_blocks=1 00:03:20.527 00:03:20.527 ' 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.527 --rc genhtml_branch_coverage=1 00:03:20.527 --rc genhtml_function_coverage=1 00:03:20.527 --rc genhtml_legend=1 00:03:20.527 --rc geninfo_all_blocks=1 00:03:20.527 --rc geninfo_unexecuted_blocks=1 00:03:20.527 00:03:20.527 ' 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.527 --rc genhtml_branch_coverage=1 00:03:20.527 --rc genhtml_function_coverage=1 00:03:20.527 --rc genhtml_legend=1 00:03:20.527 --rc geninfo_all_blocks=1 00:03:20.527 --rc geninfo_unexecuted_blocks=1 00:03:20.527 00:03:20.527 ' 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.527 --rc genhtml_branch_coverage=1 00:03:20.527 --rc genhtml_function_coverage=1 00:03:20.527 --rc genhtml_legend=1 00:03:20.527 --rc geninfo_all_blocks=1 00:03:20.527 --rc geninfo_unexecuted_blocks=1 00:03:20.527 00:03:20.527 ' 00:03:20.527 17:21:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:20.527 17:21:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:20.527 17:21:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:20.527 17:21:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:20.527 ************************************ 00:03:20.527 START TEST skip_rpc 00:03:20.527 ************************************ 00:03:20.527 17:21:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:20.527 17:21:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3259676 00:03:20.527 17:21:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:20.527 17:21:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:20.527 17:21:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:20.527 [2024-11-19 17:21:22.661827] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:20.527 [2024-11-19 17:21:22.661864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259676 ] 00:03:20.527 [2024-11-19 17:21:22.732688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:20.787 [2024-11-19 17:21:22.773214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:26.105 17:21:27 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3259676 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3259676 ']' 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3259676 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259676 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259676' 00:03:26.105 killing process with pid 3259676 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3259676 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3259676 00:03:26.105 00:03:26.105 real 0m5.367s 00:03:26.105 user 0m5.127s 00:03:26.105 sys 0m0.276s 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.105 17:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.105 ************************************ 00:03:26.105 END TEST skip_rpc 00:03:26.105 ************************************ 00:03:26.105 17:21:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:26.105 17:21:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.105 17:21:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.105 17:21:28 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.106 ************************************ 00:03:26.106 START TEST skip_rpc_with_json 00:03:26.106 ************************************ 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3260615 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3260615 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3260615 ']' 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:26.106 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:26.106 [2024-11-19 17:21:28.099284] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:03:26.106 [2024-11-19 17:21:28.099337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260615 ] 00:03:26.106 [2024-11-19 17:21:28.156340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:26.106 [2024-11-19 17:21:28.198849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:26.467 [2024-11-19 17:21:28.425630] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:26.467 request: 00:03:26.467 { 00:03:26.467 "trtype": "tcp", 00:03:26.467 "method": "nvmf_get_transports", 00:03:26.467 "req_id": 1 00:03:26.467 } 00:03:26.467 Got JSON-RPC error response 00:03:26.467 response: 00:03:26.467 { 00:03:26.467 "code": -19, 00:03:26.467 "message": "No such device" 00:03:26.467 } 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:26.467 [2024-11-19 17:21:28.437735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:26.467 17:21:28 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:26.467 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:26.467 { 00:03:26.467 "subsystems": [ 00:03:26.467 { 00:03:26.467 "subsystem": "fsdev", 00:03:26.467 "config": [ 00:03:26.467 { 00:03:26.467 "method": "fsdev_set_opts", 00:03:26.467 "params": { 00:03:26.467 "fsdev_io_pool_size": 65535, 00:03:26.467 "fsdev_io_cache_size": 256 00:03:26.467 } 00:03:26.467 } 00:03:26.467 ] 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "subsystem": "vfio_user_target", 00:03:26.467 "config": null 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "subsystem": "keyring", 00:03:26.467 "config": [] 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "subsystem": "iobuf", 00:03:26.467 "config": [ 00:03:26.467 { 00:03:26.467 "method": "iobuf_set_options", 00:03:26.467 "params": { 00:03:26.467 "small_pool_count": 8192, 00:03:26.467 "large_pool_count": 1024, 00:03:26.467 "small_bufsize": 8192, 00:03:26.467 "large_bufsize": 135168, 00:03:26.467 "enable_numa": false 00:03:26.467 } 00:03:26.467 } 00:03:26.467 ] 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "subsystem": "sock", 00:03:26.467 "config": [ 00:03:26.467 { 00:03:26.467 "method": "sock_set_default_impl", 00:03:26.467 "params": { 00:03:26.467 "impl_name": "posix" 00:03:26.467 } 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "method": "sock_impl_set_options", 00:03:26.467 "params": { 00:03:26.467 "impl_name": "ssl", 00:03:26.467 "recv_buf_size": 4096, 00:03:26.467 "send_buf_size": 4096, 
00:03:26.467 "enable_recv_pipe": true, 00:03:26.467 "enable_quickack": false, 00:03:26.467 "enable_placement_id": 0, 00:03:26.467 "enable_zerocopy_send_server": true, 00:03:26.467 "enable_zerocopy_send_client": false, 00:03:26.467 "zerocopy_threshold": 0, 00:03:26.467 "tls_version": 0, 00:03:26.467 "enable_ktls": false 00:03:26.467 } 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "method": "sock_impl_set_options", 00:03:26.467 "params": { 00:03:26.467 "impl_name": "posix", 00:03:26.467 "recv_buf_size": 2097152, 00:03:26.467 "send_buf_size": 2097152, 00:03:26.467 "enable_recv_pipe": true, 00:03:26.467 "enable_quickack": false, 00:03:26.467 "enable_placement_id": 0, 00:03:26.467 "enable_zerocopy_send_server": true, 00:03:26.467 "enable_zerocopy_send_client": false, 00:03:26.467 "zerocopy_threshold": 0, 00:03:26.467 "tls_version": 0, 00:03:26.467 "enable_ktls": false 00:03:26.467 } 00:03:26.467 } 00:03:26.467 ] 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "subsystem": "vmd", 00:03:26.467 "config": [] 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "subsystem": "accel", 00:03:26.467 "config": [ 00:03:26.467 { 00:03:26.467 "method": "accel_set_options", 00:03:26.467 "params": { 00:03:26.467 "small_cache_size": 128, 00:03:26.467 "large_cache_size": 16, 00:03:26.467 "task_count": 2048, 00:03:26.467 "sequence_count": 2048, 00:03:26.467 "buf_count": 2048 00:03:26.467 } 00:03:26.467 } 00:03:26.467 ] 00:03:26.467 }, 00:03:26.467 { 00:03:26.467 "subsystem": "bdev", 00:03:26.467 "config": [ 00:03:26.467 { 00:03:26.467 "method": "bdev_set_options", 00:03:26.467 "params": { 00:03:26.467 "bdev_io_pool_size": 65535, 00:03:26.467 "bdev_io_cache_size": 256, 00:03:26.467 "bdev_auto_examine": true, 00:03:26.467 "iobuf_small_cache_size": 128, 00:03:26.467 "iobuf_large_cache_size": 16 00:03:26.467 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "bdev_raid_set_options", 00:03:26.468 "params": { 00:03:26.468 "process_window_size_kb": 1024, 00:03:26.468 "process_max_bandwidth_mb_sec": 0 
00:03:26.468 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "bdev_iscsi_set_options", 00:03:26.468 "params": { 00:03:26.468 "timeout_sec": 30 00:03:26.468 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "bdev_nvme_set_options", 00:03:26.468 "params": { 00:03:26.468 "action_on_timeout": "none", 00:03:26.468 "timeout_us": 0, 00:03:26.468 "timeout_admin_us": 0, 00:03:26.468 "keep_alive_timeout_ms": 10000, 00:03:26.468 "arbitration_burst": 0, 00:03:26.468 "low_priority_weight": 0, 00:03:26.468 "medium_priority_weight": 0, 00:03:26.468 "high_priority_weight": 0, 00:03:26.468 "nvme_adminq_poll_period_us": 10000, 00:03:26.468 "nvme_ioq_poll_period_us": 0, 00:03:26.468 "io_queue_requests": 0, 00:03:26.468 "delay_cmd_submit": true, 00:03:26.468 "transport_retry_count": 4, 00:03:26.468 "bdev_retry_count": 3, 00:03:26.468 "transport_ack_timeout": 0, 00:03:26.468 "ctrlr_loss_timeout_sec": 0, 00:03:26.468 "reconnect_delay_sec": 0, 00:03:26.468 "fast_io_fail_timeout_sec": 0, 00:03:26.468 "disable_auto_failback": false, 00:03:26.468 "generate_uuids": false, 00:03:26.468 "transport_tos": 0, 00:03:26.468 "nvme_error_stat": false, 00:03:26.468 "rdma_srq_size": 0, 00:03:26.468 "io_path_stat": false, 00:03:26.468 "allow_accel_sequence": false, 00:03:26.468 "rdma_max_cq_size": 0, 00:03:26.468 "rdma_cm_event_timeout_ms": 0, 00:03:26.468 "dhchap_digests": [ 00:03:26.468 "sha256", 00:03:26.468 "sha384", 00:03:26.468 "sha512" 00:03:26.468 ], 00:03:26.468 "dhchap_dhgroups": [ 00:03:26.468 "null", 00:03:26.468 "ffdhe2048", 00:03:26.468 "ffdhe3072", 00:03:26.468 "ffdhe4096", 00:03:26.468 "ffdhe6144", 00:03:26.468 "ffdhe8192" 00:03:26.468 ] 00:03:26.468 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "bdev_nvme_set_hotplug", 00:03:26.468 "params": { 00:03:26.468 "period_us": 100000, 00:03:26.468 "enable": false 00:03:26.468 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "bdev_wait_for_examine" 00:03:26.468 } 00:03:26.468 ] 00:03:26.468 }, 00:03:26.468 { 
00:03:26.468 "subsystem": "scsi", 00:03:26.468 "config": null 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "subsystem": "scheduler", 00:03:26.468 "config": [ 00:03:26.468 { 00:03:26.468 "method": "framework_set_scheduler", 00:03:26.468 "params": { 00:03:26.468 "name": "static" 00:03:26.468 } 00:03:26.468 } 00:03:26.468 ] 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "subsystem": "vhost_scsi", 00:03:26.468 "config": [] 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "subsystem": "vhost_blk", 00:03:26.468 "config": [] 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "subsystem": "ublk", 00:03:26.468 "config": [] 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "subsystem": "nbd", 00:03:26.468 "config": [] 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "subsystem": "nvmf", 00:03:26.468 "config": [ 00:03:26.468 { 00:03:26.468 "method": "nvmf_set_config", 00:03:26.468 "params": { 00:03:26.468 "discovery_filter": "match_any", 00:03:26.468 "admin_cmd_passthru": { 00:03:26.468 "identify_ctrlr": false 00:03:26.468 }, 00:03:26.468 "dhchap_digests": [ 00:03:26.468 "sha256", 00:03:26.468 "sha384", 00:03:26.468 "sha512" 00:03:26.468 ], 00:03:26.468 "dhchap_dhgroups": [ 00:03:26.468 "null", 00:03:26.468 "ffdhe2048", 00:03:26.468 "ffdhe3072", 00:03:26.468 "ffdhe4096", 00:03:26.468 "ffdhe6144", 00:03:26.468 "ffdhe8192" 00:03:26.468 ] 00:03:26.468 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "nvmf_set_max_subsystems", 00:03:26.468 "params": { 00:03:26.468 "max_subsystems": 1024 00:03:26.468 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "nvmf_set_crdt", 00:03:26.468 "params": { 00:03:26.468 "crdt1": 0, 00:03:26.468 "crdt2": 0, 00:03:26.468 "crdt3": 0 00:03:26.468 } 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "method": "nvmf_create_transport", 00:03:26.468 "params": { 00:03:26.468 "trtype": "TCP", 00:03:26.468 "max_queue_depth": 128, 00:03:26.468 "max_io_qpairs_per_ctrlr": 127, 00:03:26.468 "in_capsule_data_size": 4096, 00:03:26.468 "max_io_size": 131072, 00:03:26.468 
"io_unit_size": 131072, 00:03:26.468 "max_aq_depth": 128, 00:03:26.468 "num_shared_buffers": 511, 00:03:26.468 "buf_cache_size": 4294967295, 00:03:26.468 "dif_insert_or_strip": false, 00:03:26.468 "zcopy": false, 00:03:26.468 "c2h_success": true, 00:03:26.468 "sock_priority": 0, 00:03:26.468 "abort_timeout_sec": 1, 00:03:26.468 "ack_timeout": 0, 00:03:26.468 "data_wr_pool_size": 0 00:03:26.468 } 00:03:26.468 } 00:03:26.468 ] 00:03:26.468 }, 00:03:26.468 { 00:03:26.468 "subsystem": "iscsi", 00:03:26.468 "config": [ 00:03:26.468 { 00:03:26.468 "method": "iscsi_set_options", 00:03:26.468 "params": { 00:03:26.468 "node_base": "iqn.2016-06.io.spdk", 00:03:26.468 "max_sessions": 128, 00:03:26.468 "max_connections_per_session": 2, 00:03:26.468 "max_queue_depth": 64, 00:03:26.468 "default_time2wait": 2, 00:03:26.468 "default_time2retain": 20, 00:03:26.468 "first_burst_length": 8192, 00:03:26.468 "immediate_data": true, 00:03:26.468 "allow_duplicated_isid": false, 00:03:26.468 "error_recovery_level": 0, 00:03:26.468 "nop_timeout": 60, 00:03:26.468 "nop_in_interval": 30, 00:03:26.468 "disable_chap": false, 00:03:26.468 "require_chap": false, 00:03:26.468 "mutual_chap": false, 00:03:26.468 "chap_group": 0, 00:03:26.468 "max_large_datain_per_connection": 64, 00:03:26.468 "max_r2t_per_connection": 4, 00:03:26.468 "pdu_pool_size": 36864, 00:03:26.468 "immediate_data_pool_size": 16384, 00:03:26.468 "data_out_pool_size": 2048 00:03:26.468 } 00:03:26.468 } 00:03:26.469 ] 00:03:26.469 } 00:03:26.469 ] 00:03:26.469 } 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3260615 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3260615 ']' 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3260615 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3260615 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3260615' 00:03:26.469 killing process with pid 3260615 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3260615 00:03:26.469 17:21:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3260615 00:03:26.747 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3260660 00:03:26.747 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:26.748 17:21:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:32.023 17:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3260660 00:03:32.023 17:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3260660 ']' 00:03:32.023 17:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3260660 00:03:32.023 17:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:32.023 17:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:32.023 17:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3260660 00:03:32.023 17:21:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:32.023 17:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:32.023 17:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3260660' 00:03:32.023 killing process with pid 3260660 00:03:32.023 17:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3260660 00:03:32.023 17:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3260660 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:32.282 00:03:32.282 real 0m6.291s 00:03:32.282 user 0m6.001s 00:03:32.282 sys 0m0.585s 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.282 ************************************ 00:03:32.282 END TEST skip_rpc_with_json 00:03:32.282 ************************************ 00:03:32.282 17:21:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:32.282 17:21:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.282 17:21:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.282 17:21:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.282 ************************************ 00:03:32.282 START TEST skip_rpc_with_delay 00:03:32.282 ************************************ 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.282 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:32.283 [2024-11-19 17:21:34.457417] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:32.283 00:03:32.283 real 0m0.071s 00:03:32.283 user 0m0.054s 00:03:32.283 sys 0m0.016s 00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.283 17:21:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:32.283 ************************************ 00:03:32.283 END TEST skip_rpc_with_delay 00:03:32.283 ************************************ 00:03:32.542 17:21:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:32.542 17:21:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:32.542 17:21:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:32.542 17:21:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.542 17:21:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.542 17:21:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.542 ************************************ 00:03:32.542 START TEST exit_on_failed_rpc_init 00:03:32.542 ************************************ 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3261690 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3261690 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3261690 ']' 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:32.542 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:32.542 [2024-11-19 17:21:34.592214] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:32.542 [2024-11-19 17:21:34.592256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261690 ] 00:03:32.542 [2024-11-19 17:21:34.668412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.542 [2024-11-19 17:21:34.713691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:32.802 
17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:32.802 17:21:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:32.802 [2024-11-19 17:21:34.984953] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:03:32.802 [2024-11-19 17:21:34.985000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261846 ] 00:03:33.060 [2024-11-19 17:21:35.060751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.060 [2024-11-19 17:21:35.101800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:33.060 [2024-11-19 17:21:35.101854] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:33.060 [2024-11-19 17:21:35.101863] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:33.060 [2024-11-19 17:21:35.101871] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3261690 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3261690 ']' 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3261690 00:03:33.060 17:21:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3261690 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3261690' 00:03:33.060 killing process with pid 3261690 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3261690 00:03:33.060 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3261690 00:03:33.318 00:03:33.318 real 0m0.957s 00:03:33.318 user 0m1.006s 00:03:33.318 sys 0m0.405s 00:03:33.318 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.318 17:21:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:33.318 ************************************ 00:03:33.318 END TEST exit_on_failed_rpc_init 00:03:33.318 ************************************ 00:03:33.318 17:21:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:33.318 00:03:33.318 real 0m13.138s 00:03:33.318 user 0m12.403s 00:03:33.318 sys 0m1.553s 00:03:33.318 17:21:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.318 17:21:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.318 ************************************ 00:03:33.318 END TEST skip_rpc 00:03:33.318 ************************************ 00:03:33.577 17:21:35 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:33.577 17:21:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.577 17:21:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.577 17:21:35 -- common/autotest_common.sh@10 -- # set +x 00:03:33.577 ************************************ 00:03:33.577 START TEST rpc_client 00:03:33.577 ************************************ 00:03:33.577 17:21:35 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:33.577 * Looking for test storage... 00:03:33.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:33.577 17:21:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.577 17:21:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.577 17:21:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.577 17:21:35 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.577 17:21:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:33.577 17:21:35 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.577 17:21:35 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.577 --rc genhtml_branch_coverage=1 00:03:33.577 --rc genhtml_function_coverage=1 00:03:33.577 --rc genhtml_legend=1 00:03:33.577 --rc geninfo_all_blocks=1 00:03:33.578 --rc geninfo_unexecuted_blocks=1 00:03:33.578 00:03:33.578 ' 00:03:33.578 17:21:35 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.578 --rc genhtml_branch_coverage=1 
00:03:33.578 --rc genhtml_function_coverage=1 00:03:33.578 --rc genhtml_legend=1 00:03:33.578 --rc geninfo_all_blocks=1 00:03:33.578 --rc geninfo_unexecuted_blocks=1 00:03:33.578 00:03:33.578 ' 00:03:33.578 17:21:35 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.578 --rc genhtml_branch_coverage=1 00:03:33.578 --rc genhtml_function_coverage=1 00:03:33.578 --rc genhtml_legend=1 00:03:33.578 --rc geninfo_all_blocks=1 00:03:33.578 --rc geninfo_unexecuted_blocks=1 00:03:33.578 00:03:33.578 ' 00:03:33.578 17:21:35 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.578 --rc genhtml_branch_coverage=1 00:03:33.578 --rc genhtml_function_coverage=1 00:03:33.578 --rc genhtml_legend=1 00:03:33.578 --rc geninfo_all_blocks=1 00:03:33.578 --rc geninfo_unexecuted_blocks=1 00:03:33.578 00:03:33.578 ' 00:03:33.578 17:21:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:33.837 OK 00:03:33.837 17:21:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:33.837 00:03:33.837 real 0m0.201s 00:03:33.837 user 0m0.122s 00:03:33.837 sys 0m0.093s 00:03:33.837 17:21:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.837 17:21:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:33.837 ************************************ 00:03:33.837 END TEST rpc_client 00:03:33.837 ************************************ 00:03:33.837 17:21:35 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:33.837 17:21:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.837 17:21:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.837 17:21:35 -- common/autotest_common.sh@10 
-- # set +x 00:03:33.837 ************************************ 00:03:33.837 START TEST json_config 00:03:33.837 ************************************ 00:03:33.837 17:21:35 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:33.837 17:21:35 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.837 17:21:35 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.837 17:21:35 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.837 17:21:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.837 17:21:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.837 17:21:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.837 17:21:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.837 17:21:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.837 17:21:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.837 17:21:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.837 17:21:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.837 17:21:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:33.837 17:21:36 json_config -- scripts/common.sh@345 -- # : 1 00:03:33.837 17:21:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.837 17:21:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.837 17:21:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:33.837 17:21:36 json_config -- scripts/common.sh@353 -- # local d=1 00:03:33.837 17:21:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.837 17:21:36 json_config -- scripts/common.sh@355 -- # echo 1 00:03:33.837 17:21:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.837 17:21:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@353 -- # local d=2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.837 17:21:36 json_config -- scripts/common.sh@355 -- # echo 2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.837 17:21:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.837 17:21:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.837 17:21:36 json_config -- scripts/common.sh@368 -- # return 0 00:03:33.837 17:21:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.837 17:21:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.837 --rc genhtml_branch_coverage=1 00:03:33.837 --rc genhtml_function_coverage=1 00:03:33.837 --rc genhtml_legend=1 00:03:33.837 --rc geninfo_all_blocks=1 00:03:33.837 --rc geninfo_unexecuted_blocks=1 00:03:33.837 00:03:33.837 ' 00:03:33.837 17:21:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.837 --rc genhtml_branch_coverage=1 00:03:33.837 --rc genhtml_function_coverage=1 00:03:33.837 --rc genhtml_legend=1 00:03:33.837 --rc geninfo_all_blocks=1 00:03:33.837 --rc geninfo_unexecuted_blocks=1 00:03:33.837 00:03:33.837 ' 00:03:33.837 17:21:36 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.837 --rc genhtml_branch_coverage=1 00:03:33.837 --rc genhtml_function_coverage=1 00:03:33.837 --rc genhtml_legend=1 00:03:33.837 --rc geninfo_all_blocks=1 00:03:33.837 --rc geninfo_unexecuted_blocks=1 00:03:33.837 00:03:33.837 ' 00:03:33.837 17:21:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.837 --rc genhtml_branch_coverage=1 00:03:33.837 --rc genhtml_function_coverage=1 00:03:33.837 --rc genhtml_legend=1 00:03:33.837 --rc geninfo_all_blocks=1 00:03:33.837 --rc geninfo_unexecuted_blocks=1 00:03:33.837 00:03:33.837 ' 00:03:33.837 17:21:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.837 17:21:36 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:33.837 17:21:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.838 17:21:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.838 17:21:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.838 17:21:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.838 17:21:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.838 17:21:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.838 17:21:36 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.838 17:21:36 json_config -- paths/export.sh@5 -- # export PATH 00:03:33.838 17:21:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@51 -- # : 0 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:33.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:33.838 17:21:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:34.097 17:21:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:34.097 17:21:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:34.097 INFO: JSON configuration test init 00:03:34.097 17:21:36 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.097 17:21:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:34.097 17:21:36 json_config -- json_config/common.sh@9 -- # local app=target 00:03:34.097 17:21:36 json_config -- json_config/common.sh@10 -- # shift 00:03:34.097 17:21:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:34.097 17:21:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:34.097 17:21:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:34.097 17:21:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:34.097 17:21:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:34.097 17:21:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3262156 00:03:34.097 17:21:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:34.097 Waiting for target to run... 
00:03:34.097 17:21:36 json_config -- json_config/common.sh@25 -- # waitforlisten 3262156 /var/tmp/spdk_tgt.sock 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 3262156 ']' 00:03:34.097 17:21:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:34.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:34.097 17:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.097 [2024-11-19 17:21:36.129912] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:03:34.097 [2024-11-19 17:21:36.129970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262156 ] 00:03:34.356 [2024-11-19 17:21:36.407977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.356 [2024-11-19 17:21:36.442137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.924 17:21:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.924 17:21:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:34.924 17:21:36 json_config -- json_config/common.sh@26 -- # echo '' 00:03:34.924 00:03:34.924 17:21:36 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:34.924 17:21:36 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:34.924 17:21:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.924 17:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.924 17:21:36 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:34.924 17:21:36 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:34.924 17:21:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.924 17:21:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.924 17:21:37 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:34.924 17:21:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:34.924 17:21:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:38.216 17:21:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.216 17:21:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:38.216 17:21:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@54 -- # sort 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:38.216 17:21:40 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:38.216 17:21:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.216 17:21:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:38.216 17:21:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.216 17:21:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:38.216 17:21:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:38.216 17:21:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:38.476 MallocForNvmf0 00:03:38.476 17:21:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:38.476 17:21:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:38.736 MallocForNvmf1 00:03:38.736 17:21:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:38.736 17:21:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:38.736 [2024-11-19 17:21:40.948977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:38.995 17:21:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:38.995 17:21:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:38.995 17:21:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:38.995 17:21:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:39.254 17:21:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:39.254 17:21:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:39.514 17:21:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:39.514 17:21:41 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:39.772 [2024-11-19 17:21:41.759530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:39.772 17:21:41 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:39.772 17:21:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:39.772 17:21:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.772 17:21:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:39.772 17:21:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:39.772 17:21:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.773 17:21:41 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:39.773 17:21:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:39.773 17:21:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:40.032 MallocBdevForConfigChangeCheck 00:03:40.032 17:21:42 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:40.032 17:21:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.032 17:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.032 17:21:42 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:40.032 17:21:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:40.292 17:21:42 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:40.292 INFO: shutting down applications... 00:03:40.292 17:21:42 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:40.292 17:21:42 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:40.292 17:21:42 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:40.292 17:21:42 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:42.197 Calling clear_iscsi_subsystem 00:03:42.197 Calling clear_nvmf_subsystem 00:03:42.197 Calling clear_nbd_subsystem 00:03:42.197 Calling clear_ublk_subsystem 00:03:42.197 Calling clear_vhost_blk_subsystem 00:03:42.197 Calling clear_vhost_scsi_subsystem 00:03:42.197 Calling clear_bdev_subsystem 00:03:42.197 17:21:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:42.197 17:21:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:42.197 17:21:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:42.197 17:21:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:42.197 17:21:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:42.197 17:21:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:42.197 17:21:44 json_config -- json_config/json_config.sh@352 -- # break 00:03:42.197 17:21:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:42.197 17:21:44 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:42.197 17:21:44 json_config -- json_config/common.sh@31 -- # local app=target 00:03:42.197 17:21:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:42.197 17:21:44 json_config -- json_config/common.sh@35 -- # [[ -n 3262156 ]] 00:03:42.197 17:21:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3262156 00:03:42.197 17:21:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:42.197 17:21:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:42.197 17:21:44 json_config -- json_config/common.sh@41 -- # kill -0 3262156 00:03:42.197 17:21:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:42.767 17:21:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:42.767 17:21:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:42.767 17:21:44 json_config -- json_config/common.sh@41 -- # kill -0 3262156 00:03:42.767 17:21:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:42.767 17:21:44 json_config -- json_config/common.sh@43 -- # break 00:03:42.767 17:21:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:42.767 17:21:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:42.767 SPDK target shutdown done 00:03:42.767 17:21:44 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:42.767 INFO: relaunching applications... 
00:03:42.767 17:21:44 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:42.767 17:21:44 json_config -- json_config/common.sh@9 -- # local app=target 00:03:42.767 17:21:44 json_config -- json_config/common.sh@10 -- # shift 00:03:42.767 17:21:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:42.767 17:21:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:42.767 17:21:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:42.767 17:21:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.767 17:21:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.767 17:21:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3263710 00:03:42.767 17:21:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:42.767 Waiting for target to run... 00:03:42.767 17:21:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:42.767 17:21:44 json_config -- json_config/common.sh@25 -- # waitforlisten 3263710 /var/tmp/spdk_tgt.sock 00:03:42.767 17:21:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 3263710 ']' 00:03:42.767 17:21:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:42.767 17:21:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.767 17:21:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:42.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:42.767 17:21:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.767 17:21:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.767 [2024-11-19 17:21:44.917154] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:42.767 [2024-11-19 17:21:44.917212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263710 ] 00:03:43.336 [2024-11-19 17:21:45.366602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.336 [2024-11-19 17:21:45.424011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.627 [2024-11-19 17:21:48.460868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:46.627 [2024-11-19 17:21:48.493217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:47.195 17:21:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.195 17:21:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:47.195 17:21:49 json_config -- json_config/common.sh@26 -- # echo '' 00:03:47.195 00:03:47.195 17:21:49 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:47.195 17:21:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:47.195 INFO: Checking if target configuration is the same... 
00:03:47.195 17:21:49 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.195 17:21:49 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:47.195 17:21:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:47.195 + '[' 2 -ne 2 ']' 00:03:47.195 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:47.195 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:47.195 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.195 +++ basename /dev/fd/62 00:03:47.195 ++ mktemp /tmp/62.XXX 00:03:47.195 + tmp_file_1=/tmp/62.of0 00:03:47.195 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.195 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:47.195 + tmp_file_2=/tmp/spdk_tgt_config.json.I4r 00:03:47.195 + ret=0 00:03:47.195 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.454 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.454 + diff -u /tmp/62.of0 /tmp/spdk_tgt_config.json.I4r 00:03:47.454 + echo 'INFO: JSON config files are the same' 00:03:47.454 INFO: JSON config files are the same 00:03:47.454 + rm /tmp/62.of0 /tmp/spdk_tgt_config.json.I4r 00:03:47.454 + exit 0 00:03:47.454 17:21:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:47.454 17:21:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:47.454 INFO: changing configuration and checking if this can be detected... 
00:03:47.454 17:21:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:47.454 17:21:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:47.713 17:21:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.713 17:21:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:47.713 17:21:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:47.713 + '[' 2 -ne 2 ']' 00:03:47.713 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:47.713 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:47.713 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.713 +++ basename /dev/fd/62 00:03:47.713 ++ mktemp /tmp/62.XXX 00:03:47.713 + tmp_file_1=/tmp/62.qwE 00:03:47.713 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.713 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:47.713 + tmp_file_2=/tmp/spdk_tgt_config.json.ig1 00:03:47.713 + ret=0 00:03:47.713 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.972 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.972 + diff -u /tmp/62.qwE /tmp/spdk_tgt_config.json.ig1 00:03:47.972 + ret=1 00:03:47.972 + echo '=== Start of file: /tmp/62.qwE ===' 00:03:47.972 + cat /tmp/62.qwE 00:03:47.972 + echo '=== End of file: /tmp/62.qwE ===' 00:03:47.972 + echo '' 00:03:47.972 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ig1 ===' 00:03:47.972 + cat /tmp/spdk_tgt_config.json.ig1 00:03:47.972 + echo '=== End of file: /tmp/spdk_tgt_config.json.ig1 ===' 00:03:47.972 + echo '' 00:03:47.972 + rm /tmp/62.qwE /tmp/spdk_tgt_config.json.ig1 00:03:47.972 + exit 1 00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:47.972 INFO: configuration change detected. 
00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:47.972 17:21:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.972 17:21:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 3263710 ]] 00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:47.972 17:21:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:47.973 17:21:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.973 17:21:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.973 17:21:50 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:47.973 17:21:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:47.973 17:21:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:47.973 17:21:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:47.973 17:21:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:47.973 17:21:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:47.973 17:21:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.973 17:21:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.232 17:21:50 json_config -- json_config/json_config.sh@330 -- # killprocess 3263710 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@954 -- # '[' -z 3263710 ']' 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@958 -- # kill -0 
3263710 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@959 -- # uname 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3263710 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3263710' 00:03:48.232 killing process with pid 3263710 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@973 -- # kill 3263710 00:03:48.232 17:21:50 json_config -- common/autotest_common.sh@978 -- # wait 3263710 00:03:49.611 17:21:51 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.611 17:21:51 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:49.611 17:21:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.611 17:21:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.611 17:21:51 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:49.611 17:21:51 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:49.611 INFO: Success 00:03:49.611 00:03:49.611 real 0m15.932s 00:03:49.611 user 0m16.607s 00:03:49.611 sys 0m2.595s 00:03:49.611 17:21:51 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.611 17:21:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.611 ************************************ 00:03:49.611 END TEST json_config 00:03:49.611 ************************************ 00:03:49.871 17:21:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:49.871 17:21:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.871 17:21:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.871 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:03:49.871 ************************************ 00:03:49.871 START TEST json_config_extra_key 00:03:49.871 ************************************ 00:03:49.871 17:21:51 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:49.871 17:21:51 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.871 17:21:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.871 17:21:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.871 17:21:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.871 17:21:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:49.871 17:21:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.871 17:21:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.871 --rc genhtml_branch_coverage=1 00:03:49.871 --rc genhtml_function_coverage=1 00:03:49.871 --rc genhtml_legend=1 00:03:49.871 --rc geninfo_all_blocks=1 
00:03:49.871 --rc geninfo_unexecuted_blocks=1 00:03:49.871 00:03:49.871 ' 00:03:49.871 17:21:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.871 --rc genhtml_branch_coverage=1 00:03:49.871 --rc genhtml_function_coverage=1 00:03:49.871 --rc genhtml_legend=1 00:03:49.871 --rc geninfo_all_blocks=1 00:03:49.871 --rc geninfo_unexecuted_blocks=1 00:03:49.871 00:03:49.871 ' 00:03:49.871 17:21:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.871 --rc genhtml_branch_coverage=1 00:03:49.871 --rc genhtml_function_coverage=1 00:03:49.871 --rc genhtml_legend=1 00:03:49.871 --rc geninfo_all_blocks=1 00:03:49.871 --rc geninfo_unexecuted_blocks=1 00:03:49.871 00:03:49.871 ' 00:03:49.871 17:21:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.871 --rc genhtml_branch_coverage=1 00:03:49.871 --rc genhtml_function_coverage=1 00:03:49.871 --rc genhtml_legend=1 00:03:49.871 --rc geninfo_all_blocks=1 00:03:49.871 --rc geninfo_unexecuted_blocks=1 00:03:49.871 00:03:49.871 ' 00:03:49.871 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.871 17:21:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:49.872 17:21:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.872 17:21:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.872 17:21:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.872 17:21:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.872 17:21:52 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.872 17:21:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.872 17:21:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.872 17:21:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:49.872 17:21:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:49.872 17:21:52 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.872 17:21:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:49.872 INFO: launching applications... 00:03:49.872 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3264994 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:49.872 Waiting for target to run... 
00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3264994 /var/tmp/spdk_tgt.sock 00:03:49.872 17:21:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3264994 ']' 00:03:49.872 17:21:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:49.872 17:21:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:49.872 17:21:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.872 17:21:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:49.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:49.872 17:21:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.872 17:21:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:50.132 [2024-11-19 17:21:52.114594] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:03:50.132 [2024-11-19 17:21:52.114641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264994 ] 00:03:50.392 [2024-11-19 17:21:52.402937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.392 [2024-11-19 17:21:52.438009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.962 17:21:52 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.962 17:21:52 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:50.962 00:03:50.962 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:50.962 INFO: shutting down applications... 00:03:50.962 17:21:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3264994 ]] 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3264994 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3264994 00:03:50.962 17:21:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:51.530 17:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:51.530 17:21:53 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:51.530 17:21:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3264994 00:03:51.530 17:21:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:51.530 17:21:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:51.530 17:21:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:51.530 17:21:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:51.530 SPDK target shutdown done 00:03:51.530 17:21:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:51.530 Success 00:03:51.530 00:03:51.530 real 0m1.571s 00:03:51.530 user 0m1.352s 00:03:51.530 sys 0m0.399s 00:03:51.530 17:21:53 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.530 17:21:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:51.530 ************************************ 00:03:51.530 END TEST json_config_extra_key 00:03:51.530 ************************************ 00:03:51.530 17:21:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:51.530 17:21:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.530 17:21:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.530 17:21:53 -- common/autotest_common.sh@10 -- # set +x 00:03:51.530 ************************************ 00:03:51.530 START TEST alias_rpc 00:03:51.530 ************************************ 00:03:51.530 17:21:53 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:51.530 * Looking for test storage... 
00:03:51.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.531 17:21:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:51.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.531 --rc genhtml_branch_coverage=1 00:03:51.531 --rc genhtml_function_coverage=1 00:03:51.531 --rc genhtml_legend=1 00:03:51.531 --rc geninfo_all_blocks=1 00:03:51.531 --rc geninfo_unexecuted_blocks=1 00:03:51.531 00:03:51.531 ' 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:51.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.531 --rc genhtml_branch_coverage=1 00:03:51.531 --rc genhtml_function_coverage=1 00:03:51.531 --rc genhtml_legend=1 00:03:51.531 --rc geninfo_all_blocks=1 00:03:51.531 --rc geninfo_unexecuted_blocks=1 00:03:51.531 00:03:51.531 ' 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:03:51.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.531 --rc genhtml_branch_coverage=1 00:03:51.531 --rc genhtml_function_coverage=1 00:03:51.531 --rc genhtml_legend=1 00:03:51.531 --rc geninfo_all_blocks=1 00:03:51.531 --rc geninfo_unexecuted_blocks=1 00:03:51.531 00:03:51.531 ' 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:51.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.531 --rc genhtml_branch_coverage=1 00:03:51.531 --rc genhtml_function_coverage=1 00:03:51.531 --rc genhtml_legend=1 00:03:51.531 --rc geninfo_all_blocks=1 00:03:51.531 --rc geninfo_unexecuted_blocks=1 00:03:51.531 00:03:51.531 ' 00:03:51.531 17:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:51.531 17:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3265340 00:03:51.531 17:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.531 17:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3265340 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3265340 ']' 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.531 17:21:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.790 [2024-11-19 17:21:53.754076] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
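The `lt 1.15 2` trace above (scripts/common.sh@333-368, via `cmp_versions 1.15 '<' 2`) splits both dotted versions into arrays on `IFS=.-:` and compares them component-wise. A minimal self-contained sketch of that comparison, with an illustrative helper name rather than the exact SPDK implementation:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version "less than" check traced above.
# Split on '.', compare numerically component by component; missing
# components are treated as 0. Illustrative only, not scripts/common.sh.
lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi   # strictly less: true
        if (( a > b )); then return 1; fi   # strictly greater: false
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo 'lcov older than 2'
```

Here `lt 1.15 2` succeeds because the first components already decide (1 < 2), matching the `ver1[v]=1` / `ver2[v]=2` comparison in the trace, which is why the script then enables the branch-coverage LCOV_OPTS.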
00:03:51.790 [2024-11-19 17:21:53.754129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265340 ] 00:03:51.790 [2024-11-19 17:21:53.830181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.790 [2024-11-19 17:21:53.872786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.049 17:21:54 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.049 17:21:54 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:52.049 17:21:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:52.307 17:21:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3265340 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3265340 ']' 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3265340 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3265340 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3265340' 00:03:52.307 killing process with pid 3265340 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@973 -- # kill 3265340 00:03:52.307 17:21:54 alias_rpc -- common/autotest_common.sh@978 -- # wait 3265340 00:03:52.566 00:03:52.566 real 0m1.139s 00:03:52.566 user 0m1.150s 00:03:52.566 sys 0m0.429s 00:03:52.566 17:21:54 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.566 17:21:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.567 ************************************ 00:03:52.567 END TEST alias_rpc 00:03:52.567 ************************************ 00:03:52.567 17:21:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:52.567 17:21:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:52.567 17:21:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.567 17:21:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.567 17:21:54 -- common/autotest_common.sh@10 -- # set +x 00:03:52.567 ************************************ 00:03:52.567 START TEST spdkcli_tcp 00:03:52.567 ************************************ 00:03:52.567 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:52.826 * Looking for test storage... 
00:03:52.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.826 17:21:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.826 --rc genhtml_branch_coverage=1 00:03:52.826 --rc genhtml_function_coverage=1 00:03:52.826 --rc genhtml_legend=1 00:03:52.826 --rc geninfo_all_blocks=1 00:03:52.826 --rc geninfo_unexecuted_blocks=1 00:03:52.826 00:03:52.826 ' 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.826 --rc genhtml_branch_coverage=1 00:03:52.826 --rc genhtml_function_coverage=1 00:03:52.826 --rc genhtml_legend=1 00:03:52.826 --rc geninfo_all_blocks=1 00:03:52.826 --rc geninfo_unexecuted_blocks=1 00:03:52.826 00:03:52.826 ' 00:03:52.826 17:21:54 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.826 --rc genhtml_branch_coverage=1 00:03:52.826 --rc genhtml_function_coverage=1 00:03:52.826 --rc genhtml_legend=1 00:03:52.826 --rc geninfo_all_blocks=1 00:03:52.826 --rc geninfo_unexecuted_blocks=1 00:03:52.826 00:03:52.826 ' 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.826 --rc genhtml_branch_coverage=1 00:03:52.826 --rc genhtml_function_coverage=1 00:03:52.826 --rc genhtml_legend=1 00:03:52.826 --rc geninfo_all_blocks=1 00:03:52.826 --rc geninfo_unexecuted_blocks=1 00:03:52.826 00:03:52.826 ' 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3265573 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3265573 00:03:52.826 17:21:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3265573 ']' 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.826 17:21:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:52.826 [2024-11-19 17:21:54.965486] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:52.826 [2024-11-19 17:21:54.965534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265573 ] 00:03:52.826 [2024-11-19 17:21:55.044049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:53.085 [2024-11-19 17:21:55.088839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:53.085 [2024-11-19 17:21:55.088840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.653 17:21:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:53.653 17:21:55 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:53.653 17:21:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3265806 00:03:53.653 17:21:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:53.653 17:21:55 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:53.912 [ 00:03:53.912 "bdev_malloc_delete", 00:03:53.912 "bdev_malloc_create", 00:03:53.912 "bdev_null_resize", 00:03:53.912 "bdev_null_delete", 00:03:53.912 "bdev_null_create", 00:03:53.912 "bdev_nvme_cuse_unregister", 00:03:53.912 "bdev_nvme_cuse_register", 00:03:53.912 "bdev_opal_new_user", 00:03:53.912 "bdev_opal_set_lock_state", 00:03:53.912 "bdev_opal_delete", 00:03:53.912 "bdev_opal_get_info", 00:03:53.912 "bdev_opal_create", 00:03:53.912 "bdev_nvme_opal_revert", 00:03:53.912 "bdev_nvme_opal_init", 00:03:53.912 "bdev_nvme_send_cmd", 00:03:53.913 "bdev_nvme_set_keys", 00:03:53.913 "bdev_nvme_get_path_iostat", 00:03:53.913 "bdev_nvme_get_mdns_discovery_info", 00:03:53.913 "bdev_nvme_stop_mdns_discovery", 00:03:53.913 "bdev_nvme_start_mdns_discovery", 00:03:53.913 "bdev_nvme_set_multipath_policy", 00:03:53.913 "bdev_nvme_set_preferred_path", 00:03:53.913 "bdev_nvme_get_io_paths", 00:03:53.913 "bdev_nvme_remove_error_injection", 00:03:53.913 "bdev_nvme_add_error_injection", 00:03:53.913 "bdev_nvme_get_discovery_info", 00:03:53.913 "bdev_nvme_stop_discovery", 00:03:53.913 "bdev_nvme_start_discovery", 00:03:53.913 "bdev_nvme_get_controller_health_info", 00:03:53.913 "bdev_nvme_disable_controller", 00:03:53.913 "bdev_nvme_enable_controller", 00:03:53.913 "bdev_nvme_reset_controller", 00:03:53.913 "bdev_nvme_get_transport_statistics", 00:03:53.913 "bdev_nvme_apply_firmware", 00:03:53.913 "bdev_nvme_detach_controller", 00:03:53.913 "bdev_nvme_get_controllers", 00:03:53.913 "bdev_nvme_attach_controller", 00:03:53.913 "bdev_nvme_set_hotplug", 00:03:53.913 "bdev_nvme_set_options", 00:03:53.913 "bdev_passthru_delete", 00:03:53.913 "bdev_passthru_create", 00:03:53.913 "bdev_lvol_set_parent_bdev", 00:03:53.913 "bdev_lvol_set_parent", 00:03:53.913 "bdev_lvol_check_shallow_copy", 00:03:53.913 "bdev_lvol_start_shallow_copy", 00:03:53.913 "bdev_lvol_grow_lvstore", 00:03:53.913 
"bdev_lvol_get_lvols", 00:03:53.913 "bdev_lvol_get_lvstores", 00:03:53.913 "bdev_lvol_delete", 00:03:53.913 "bdev_lvol_set_read_only", 00:03:53.913 "bdev_lvol_resize", 00:03:53.913 "bdev_lvol_decouple_parent", 00:03:53.913 "bdev_lvol_inflate", 00:03:53.913 "bdev_lvol_rename", 00:03:53.913 "bdev_lvol_clone_bdev", 00:03:53.913 "bdev_lvol_clone", 00:03:53.913 "bdev_lvol_snapshot", 00:03:53.913 "bdev_lvol_create", 00:03:53.913 "bdev_lvol_delete_lvstore", 00:03:53.913 "bdev_lvol_rename_lvstore", 00:03:53.913 "bdev_lvol_create_lvstore", 00:03:53.913 "bdev_raid_set_options", 00:03:53.913 "bdev_raid_remove_base_bdev", 00:03:53.913 "bdev_raid_add_base_bdev", 00:03:53.913 "bdev_raid_delete", 00:03:53.913 "bdev_raid_create", 00:03:53.913 "bdev_raid_get_bdevs", 00:03:53.913 "bdev_error_inject_error", 00:03:53.913 "bdev_error_delete", 00:03:53.913 "bdev_error_create", 00:03:53.913 "bdev_split_delete", 00:03:53.913 "bdev_split_create", 00:03:53.913 "bdev_delay_delete", 00:03:53.913 "bdev_delay_create", 00:03:53.913 "bdev_delay_update_latency", 00:03:53.913 "bdev_zone_block_delete", 00:03:53.913 "bdev_zone_block_create", 00:03:53.913 "blobfs_create", 00:03:53.913 "blobfs_detect", 00:03:53.913 "blobfs_set_cache_size", 00:03:53.913 "bdev_aio_delete", 00:03:53.913 "bdev_aio_rescan", 00:03:53.913 "bdev_aio_create", 00:03:53.913 "bdev_ftl_set_property", 00:03:53.913 "bdev_ftl_get_properties", 00:03:53.913 "bdev_ftl_get_stats", 00:03:53.913 "bdev_ftl_unmap", 00:03:53.913 "bdev_ftl_unload", 00:03:53.913 "bdev_ftl_delete", 00:03:53.913 "bdev_ftl_load", 00:03:53.913 "bdev_ftl_create", 00:03:53.913 "bdev_virtio_attach_controller", 00:03:53.913 "bdev_virtio_scsi_get_devices", 00:03:53.913 "bdev_virtio_detach_controller", 00:03:53.913 "bdev_virtio_blk_set_hotplug", 00:03:53.913 "bdev_iscsi_delete", 00:03:53.913 "bdev_iscsi_create", 00:03:53.913 "bdev_iscsi_set_options", 00:03:53.913 "accel_error_inject_error", 00:03:53.913 "ioat_scan_accel_module", 00:03:53.913 "dsa_scan_accel_module", 
00:03:53.913 "iaa_scan_accel_module", 00:03:53.913 "vfu_virtio_create_fs_endpoint", 00:03:53.913 "vfu_virtio_create_scsi_endpoint", 00:03:53.913 "vfu_virtio_scsi_remove_target", 00:03:53.913 "vfu_virtio_scsi_add_target", 00:03:53.913 "vfu_virtio_create_blk_endpoint", 00:03:53.913 "vfu_virtio_delete_endpoint", 00:03:53.913 "keyring_file_remove_key", 00:03:53.913 "keyring_file_add_key", 00:03:53.913 "keyring_linux_set_options", 00:03:53.913 "fsdev_aio_delete", 00:03:53.913 "fsdev_aio_create", 00:03:53.913 "iscsi_get_histogram", 00:03:53.913 "iscsi_enable_histogram", 00:03:53.913 "iscsi_set_options", 00:03:53.913 "iscsi_get_auth_groups", 00:03:53.913 "iscsi_auth_group_remove_secret", 00:03:53.913 "iscsi_auth_group_add_secret", 00:03:53.913 "iscsi_delete_auth_group", 00:03:53.913 "iscsi_create_auth_group", 00:03:53.913 "iscsi_set_discovery_auth", 00:03:53.913 "iscsi_get_options", 00:03:53.913 "iscsi_target_node_request_logout", 00:03:53.913 "iscsi_target_node_set_redirect", 00:03:53.913 "iscsi_target_node_set_auth", 00:03:53.913 "iscsi_target_node_add_lun", 00:03:53.913 "iscsi_get_stats", 00:03:53.913 "iscsi_get_connections", 00:03:53.913 "iscsi_portal_group_set_auth", 00:03:53.913 "iscsi_start_portal_group", 00:03:53.913 "iscsi_delete_portal_group", 00:03:53.913 "iscsi_create_portal_group", 00:03:53.913 "iscsi_get_portal_groups", 00:03:53.913 "iscsi_delete_target_node", 00:03:53.913 "iscsi_target_node_remove_pg_ig_maps", 00:03:53.913 "iscsi_target_node_add_pg_ig_maps", 00:03:53.913 "iscsi_create_target_node", 00:03:53.913 "iscsi_get_target_nodes", 00:03:53.913 "iscsi_delete_initiator_group", 00:03:53.913 "iscsi_initiator_group_remove_initiators", 00:03:53.913 "iscsi_initiator_group_add_initiators", 00:03:53.913 "iscsi_create_initiator_group", 00:03:53.913 "iscsi_get_initiator_groups", 00:03:53.913 "nvmf_set_crdt", 00:03:53.913 "nvmf_set_config", 00:03:53.913 "nvmf_set_max_subsystems", 00:03:53.913 "nvmf_stop_mdns_prr", 00:03:53.913 "nvmf_publish_mdns_prr", 
00:03:53.913 "nvmf_subsystem_get_listeners", 00:03:53.913 "nvmf_subsystem_get_qpairs", 00:03:53.913 "nvmf_subsystem_get_controllers", 00:03:53.913 "nvmf_get_stats", 00:03:53.913 "nvmf_get_transports", 00:03:53.913 "nvmf_create_transport", 00:03:53.913 "nvmf_get_targets", 00:03:53.913 "nvmf_delete_target", 00:03:53.913 "nvmf_create_target", 00:03:53.913 "nvmf_subsystem_allow_any_host", 00:03:53.913 "nvmf_subsystem_set_keys", 00:03:53.913 "nvmf_subsystem_remove_host", 00:03:53.913 "nvmf_subsystem_add_host", 00:03:53.913 "nvmf_ns_remove_host", 00:03:53.913 "nvmf_ns_add_host", 00:03:53.913 "nvmf_subsystem_remove_ns", 00:03:53.913 "nvmf_subsystem_set_ns_ana_group", 00:03:53.913 "nvmf_subsystem_add_ns", 00:03:53.913 "nvmf_subsystem_listener_set_ana_state", 00:03:53.913 "nvmf_discovery_get_referrals", 00:03:53.913 "nvmf_discovery_remove_referral", 00:03:53.913 "nvmf_discovery_add_referral", 00:03:53.913 "nvmf_subsystem_remove_listener", 00:03:53.913 "nvmf_subsystem_add_listener", 00:03:53.913 "nvmf_delete_subsystem", 00:03:53.913 "nvmf_create_subsystem", 00:03:53.913 "nvmf_get_subsystems", 00:03:53.913 "env_dpdk_get_mem_stats", 00:03:53.913 "nbd_get_disks", 00:03:53.913 "nbd_stop_disk", 00:03:53.913 "nbd_start_disk", 00:03:53.913 "ublk_recover_disk", 00:03:53.913 "ublk_get_disks", 00:03:53.913 "ublk_stop_disk", 00:03:53.913 "ublk_start_disk", 00:03:53.913 "ublk_destroy_target", 00:03:53.913 "ublk_create_target", 00:03:53.913 "virtio_blk_create_transport", 00:03:53.913 "virtio_blk_get_transports", 00:03:53.913 "vhost_controller_set_coalescing", 00:03:53.913 "vhost_get_controllers", 00:03:53.913 "vhost_delete_controller", 00:03:53.913 "vhost_create_blk_controller", 00:03:53.913 "vhost_scsi_controller_remove_target", 00:03:53.913 "vhost_scsi_controller_add_target", 00:03:53.913 "vhost_start_scsi_controller", 00:03:53.913 "vhost_create_scsi_controller", 00:03:53.913 "thread_set_cpumask", 00:03:53.913 "scheduler_set_options", 00:03:53.913 "framework_get_governor", 00:03:53.913 
"framework_get_scheduler", 00:03:53.913 "framework_set_scheduler", 00:03:53.913 "framework_get_reactors", 00:03:53.913 "thread_get_io_channels", 00:03:53.913 "thread_get_pollers", 00:03:53.913 "thread_get_stats", 00:03:53.913 "framework_monitor_context_switch", 00:03:53.913 "spdk_kill_instance", 00:03:53.913 "log_enable_timestamps", 00:03:53.913 "log_get_flags", 00:03:53.913 "log_clear_flag", 00:03:53.913 "log_set_flag", 00:03:53.913 "log_get_level", 00:03:53.913 "log_set_level", 00:03:53.913 "log_get_print_level", 00:03:53.913 "log_set_print_level", 00:03:53.913 "framework_enable_cpumask_locks", 00:03:53.913 "framework_disable_cpumask_locks", 00:03:53.913 "framework_wait_init", 00:03:53.913 "framework_start_init", 00:03:53.913 "scsi_get_devices", 00:03:53.913 "bdev_get_histogram", 00:03:53.913 "bdev_enable_histogram", 00:03:53.913 "bdev_set_qos_limit", 00:03:53.913 "bdev_set_qd_sampling_period", 00:03:53.913 "bdev_get_bdevs", 00:03:53.913 "bdev_reset_iostat", 00:03:53.913 "bdev_get_iostat", 00:03:53.913 "bdev_examine", 00:03:53.913 "bdev_wait_for_examine", 00:03:53.913 "bdev_set_options", 00:03:53.913 "accel_get_stats", 00:03:53.913 "accel_set_options", 00:03:53.913 "accel_set_driver", 00:03:53.913 "accel_crypto_key_destroy", 00:03:53.913 "accel_crypto_keys_get", 00:03:53.913 "accel_crypto_key_create", 00:03:53.913 "accel_assign_opc", 00:03:53.913 "accel_get_module_info", 00:03:53.913 "accel_get_opc_assignments", 00:03:53.913 "vmd_rescan", 00:03:53.913 "vmd_remove_device", 00:03:53.913 "vmd_enable", 00:03:53.914 "sock_get_default_impl", 00:03:53.914 "sock_set_default_impl", 00:03:53.914 "sock_impl_set_options", 00:03:53.914 "sock_impl_get_options", 00:03:53.914 "iobuf_get_stats", 00:03:53.914 "iobuf_set_options", 00:03:53.914 "keyring_get_keys", 00:03:53.914 "vfu_tgt_set_base_path", 00:03:53.914 "framework_get_pci_devices", 00:03:53.914 "framework_get_config", 00:03:53.914 "framework_get_subsystems", 00:03:53.914 "fsdev_set_opts", 00:03:53.914 "fsdev_get_opts", 
00:03:53.914 "trace_get_info", 00:03:53.914 "trace_get_tpoint_group_mask", 00:03:53.914 "trace_disable_tpoint_group", 00:03:53.914 "trace_enable_tpoint_group", 00:03:53.914 "trace_clear_tpoint_mask", 00:03:53.914 "trace_set_tpoint_mask", 00:03:53.914 "notify_get_notifications", 00:03:53.914 "notify_get_types", 00:03:53.914 "spdk_get_version", 00:03:53.914 "rpc_get_methods" 00:03:53.914 ] 00:03:53.914 17:21:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:53.914 17:21:55 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.914 17:21:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:53.914 17:21:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:53.914 17:21:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3265573 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3265573 ']' 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3265573 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3265573 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3265573' 00:03:53.914 killing process with pid 3265573 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3265573 00:03:53.914 17:21:56 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3265573 00:03:54.173 00:03:54.173 real 0m1.656s 00:03:54.173 user 0m3.048s 00:03:54.173 sys 0m0.506s 00:03:54.173 17:21:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.173 17:21:56 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:54.173 ************************************ 00:03:54.173 END TEST spdkcli_tcp 00:03:54.173 ************************************ 00:03:54.433 17:21:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:54.433 17:21:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.433 17:21:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.433 17:21:56 -- common/autotest_common.sh@10 -- # set +x 00:03:54.433 ************************************ 00:03:54.433 START TEST dpdk_mem_utility 00:03:54.433 ************************************ 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:54.433 * Looking for test storage... 00:03:54.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.433 17:21:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:03:54.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.433 --rc genhtml_branch_coverage=1 00:03:54.433 --rc genhtml_function_coverage=1 00:03:54.433 --rc genhtml_legend=1 00:03:54.433 --rc geninfo_all_blocks=1 00:03:54.433 --rc geninfo_unexecuted_blocks=1 00:03:54.433 00:03:54.433 ' 00:03:54.433 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:54.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.434 --rc genhtml_branch_coverage=1 00:03:54.434 --rc genhtml_function_coverage=1 00:03:54.434 --rc genhtml_legend=1 00:03:54.434 --rc geninfo_all_blocks=1 00:03:54.434 --rc geninfo_unexecuted_blocks=1 00:03:54.434 00:03:54.434 ' 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:54.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.434 --rc genhtml_branch_coverage=1 00:03:54.434 --rc genhtml_function_coverage=1 00:03:54.434 --rc genhtml_legend=1 00:03:54.434 --rc geninfo_all_blocks=1 00:03:54.434 --rc geninfo_unexecuted_blocks=1 00:03:54.434 00:03:54.434 ' 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:54.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.434 --rc genhtml_branch_coverage=1 00:03:54.434 --rc genhtml_function_coverage=1 00:03:54.434 --rc genhtml_legend=1 00:03:54.434 --rc geninfo_all_blocks=1 00:03:54.434 --rc geninfo_unexecuted_blocks=1 00:03:54.434 00:03:54.434 ' 00:03:54.434 17:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:54.434 17:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3266074 00:03:54.434 17:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.434 17:21:56 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3266074 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3266074 ']' 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:54.434 17:21:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:54.693 [2024-11-19 17:21:56.681513] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:54.693 [2024-11-19 17:21:56.681566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266074 ] 00:03:54.693 [2024-11-19 17:21:56.753858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.693 [2024-11-19 17:21:56.794064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.953 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.953 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:54.953 17:21:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:54.953 17:21:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:54.953 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:03:54.953 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:54.953 { 00:03:54.953 "filename": "/tmp/spdk_mem_dump.txt" 00:03:54.953 } 00:03:54.953 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.953 17:21:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:54.953 DPDK memory size 810.000000 MiB in 1 heap(s) 00:03:54.953 1 heaps totaling size 810.000000 MiB 00:03:54.953 size: 810.000000 MiB heap id: 0 00:03:54.953 end heaps---------- 00:03:54.953 9 mempools totaling size 595.772034 MiB 00:03:54.953 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:54.953 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:54.953 size: 92.545471 MiB name: bdev_io_3266074 00:03:54.953 size: 50.003479 MiB name: msgpool_3266074 00:03:54.953 size: 36.509338 MiB name: fsdev_io_3266074 00:03:54.953 size: 21.763794 MiB name: PDU_Pool 00:03:54.953 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:54.953 size: 4.133484 MiB name: evtpool_3266074 00:03:54.953 size: 0.026123 MiB name: Session_Pool 00:03:54.953 end mempools------- 00:03:54.953 6 memzones totaling size 4.142822 MiB 00:03:54.953 size: 1.000366 MiB name: RG_ring_0_3266074 00:03:54.953 size: 1.000366 MiB name: RG_ring_1_3266074 00:03:54.953 size: 1.000366 MiB name: RG_ring_4_3266074 00:03:54.953 size: 1.000366 MiB name: RG_ring_5_3266074 00:03:54.953 size: 0.125366 MiB name: RG_ring_2_3266074 00:03:54.953 size: 0.015991 MiB name: RG_ring_3_3266074 00:03:54.953 end memzones------- 00:03:54.953 17:21:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:54.953 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:54.953 list of free elements. 
size: 10.862488 MiB 00:03:54.953 element at address: 0x200018a00000 with size: 0.999878 MiB 00:03:54.953 element at address: 0x200018c00000 with size: 0.999878 MiB 00:03:54.953 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:54.953 element at address: 0x200031800000 with size: 0.994446 MiB 00:03:54.953 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:54.953 element at address: 0x200012c00000 with size: 0.954285 MiB 00:03:54.953 element at address: 0x200018e00000 with size: 0.936584 MiB 00:03:54.953 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:54.953 element at address: 0x20001a600000 with size: 0.582886 MiB 00:03:54.953 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:54.953 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:54.953 element at address: 0x200019000000 with size: 0.485657 MiB 00:03:54.953 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:54.953 element at address: 0x200027a00000 with size: 0.410034 MiB 00:03:54.953 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:54.953 list of standard malloc elements. 
size: 199.218628 MiB 00:03:54.953 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:54.953 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:54.953 element at address: 0x200018afff80 with size: 1.000122 MiB 00:03:54.953 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:03:54.953 element at address: 0x200018efff80 with size: 1.000122 MiB 00:03:54.953 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:54.953 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:03:54.953 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:54.953 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:03:54.953 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:54.953 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:54.953 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:54.953 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:54.953 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:54.953 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:54.953 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:54.953 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:54.953 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:54.953 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:54.953 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:54.953 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:54.954 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:54.954 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:54.954 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:54.954 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:54.954 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:54.954 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:54.954 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:03:54.954 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:03:54.954 element at address: 0x20001a695380 with size: 0.000183 MiB 00:03:54.954 element at address: 0x20001a695440 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200027a69040 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:03:54.954 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:03:54.954 list of memzone associated elements. 
size: 599.918884 MiB 00:03:54.954 element at address: 0x20001a695500 with size: 211.416748 MiB 00:03:54.954 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:54.954 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:03:54.954 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:54.954 element at address: 0x200012df4780 with size: 92.045044 MiB 00:03:54.954 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3266074_0 00:03:54.954 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:54.954 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3266074_0 00:03:54.954 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:54.954 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3266074_0 00:03:54.954 element at address: 0x2000191be940 with size: 20.255554 MiB 00:03:54.954 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:54.954 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:03:54.954 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:54.954 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:54.954 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3266074_0 00:03:54.954 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:54.954 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3266074 00:03:54.954 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:54.954 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3266074 00:03:54.954 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:54.954 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:54.954 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:03:54.954 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:54.954 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:54.954 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:54.954 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:54.954 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:54.954 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:54.954 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3266074 00:03:54.954 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:54.954 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3266074 00:03:54.954 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:03:54.954 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3266074 00:03:54.954 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:03:54.954 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3266074 00:03:54.954 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:54.954 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3266074 00:03:54.954 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:54.954 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3266074 00:03:54.954 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:54.954 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:54.954 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:54.954 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:54.954 element at address: 0x20001907c540 with size: 0.250488 MiB 00:03:54.954 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:54.954 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:54.954 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3266074 00:03:54.954 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:54.954 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3266074 00:03:54.954 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:03:54.954 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:54.954 element at address: 0x200027a69100 with size: 0.023743 MiB 00:03:54.954 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:54.954 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:54.954 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3266074 00:03:54.954 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:03:54.954 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:54.954 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:54.954 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3266074 00:03:54.954 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:54.954 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3266074 00:03:54.954 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:54.954 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3266074 00:03:54.954 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:03:54.954 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:54.954 17:21:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:54.954 17:21:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3266074 00:03:54.954 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3266074 ']' 00:03:54.954 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3266074 00:03:54.954 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:03:54.954 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.954 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266074 00:03:55.214 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.214 17:21:57 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.214 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266074' 00:03:55.214 killing process with pid 3266074 00:03:55.214 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3266074 00:03:55.214 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3266074 00:03:55.473 00:03:55.473 real 0m1.020s 00:03:55.473 user 0m0.962s 00:03:55.473 sys 0m0.414s 00:03:55.473 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.473 17:21:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:55.473 ************************************ 00:03:55.473 END TEST dpdk_mem_utility 00:03:55.473 ************************************ 00:03:55.473 17:21:57 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:55.473 17:21:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.473 17:21:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.473 17:21:57 -- common/autotest_common.sh@10 -- # set +x 00:03:55.473 ************************************ 00:03:55.473 START TEST event 00:03:55.473 ************************************ 00:03:55.473 17:21:57 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:55.473 * Looking for test storage... 
00:03:55.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:55.473 17:21:57 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:55.473 17:21:57 event -- common/autotest_common.sh@1693 -- # lcov --version 00:03:55.473 17:21:57 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:55.733 17:21:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.733 17:21:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.733 17:21:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.733 17:21:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.733 17:21:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.733 17:21:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.733 17:21:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.733 17:21:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.733 17:21:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.733 17:21:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.733 17:21:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.733 17:21:57 event -- scripts/common.sh@344 -- # case "$op" in 00:03:55.733 17:21:57 event -- scripts/common.sh@345 -- # : 1 00:03:55.733 17:21:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.733 17:21:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.733 17:21:57 event -- scripts/common.sh@365 -- # decimal 1 00:03:55.733 17:21:57 event -- scripts/common.sh@353 -- # local d=1 00:03:55.733 17:21:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.733 17:21:57 event -- scripts/common.sh@355 -- # echo 1 00:03:55.733 17:21:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.733 17:21:57 event -- scripts/common.sh@366 -- # decimal 2 00:03:55.733 17:21:57 event -- scripts/common.sh@353 -- # local d=2 00:03:55.733 17:21:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.733 17:21:57 event -- scripts/common.sh@355 -- # echo 2 00:03:55.733 17:21:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.733 17:21:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.733 17:21:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.733 17:21:57 event -- scripts/common.sh@368 -- # return 0 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:55.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.733 --rc genhtml_branch_coverage=1 00:03:55.733 --rc genhtml_function_coverage=1 00:03:55.733 --rc genhtml_legend=1 00:03:55.733 --rc geninfo_all_blocks=1 00:03:55.733 --rc geninfo_unexecuted_blocks=1 00:03:55.733 00:03:55.733 ' 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:55.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.733 --rc genhtml_branch_coverage=1 00:03:55.733 --rc genhtml_function_coverage=1 00:03:55.733 --rc genhtml_legend=1 00:03:55.733 --rc geninfo_all_blocks=1 00:03:55.733 --rc geninfo_unexecuted_blocks=1 00:03:55.733 00:03:55.733 ' 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:55.733 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:55.733 --rc genhtml_branch_coverage=1 00:03:55.733 --rc genhtml_function_coverage=1 00:03:55.733 --rc genhtml_legend=1 00:03:55.733 --rc geninfo_all_blocks=1 00:03:55.733 --rc geninfo_unexecuted_blocks=1 00:03:55.733 00:03:55.733 ' 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:55.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.733 --rc genhtml_branch_coverage=1 00:03:55.733 --rc genhtml_function_coverage=1 00:03:55.733 --rc genhtml_legend=1 00:03:55.733 --rc geninfo_all_blocks=1 00:03:55.733 --rc geninfo_unexecuted_blocks=1 00:03:55.733 00:03:55.733 ' 00:03:55.733 17:21:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:55.733 17:21:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:55.733 17:21:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:03:55.733 17:21:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.733 17:21:57 event -- common/autotest_common.sh@10 -- # set +x 00:03:55.733 ************************************ 00:03:55.733 START TEST event_perf 00:03:55.733 ************************************ 00:03:55.733 17:21:57 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:55.733 Running I/O for 1 seconds...[2024-11-19 17:21:57.780369] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:03:55.733 [2024-11-19 17:21:57.780440] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266214 ] 00:03:55.733 [2024-11-19 17:21:57.857623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:55.733 [2024-11-19 17:21:57.903752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:55.733 [2024-11-19 17:21:57.903861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:55.733 [2024-11-19 17:21:57.903982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.733 [2024-11-19 17:21:57.903983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:57.113 Running I/O for 1 seconds... 00:03:57.113 lcore 0: 201539 00:03:57.113 lcore 1: 201537 00:03:57.113 lcore 2: 201537 00:03:57.113 lcore 3: 201538 00:03:57.113 done. 
00:03:57.113 00:03:57.113 real 0m1.185s 00:03:57.113 user 0m4.099s 00:03:57.113 sys 0m0.083s 00:03:57.113 17:21:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.113 17:21:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:57.113 ************************************ 00:03:57.113 END TEST event_perf 00:03:57.113 ************************************ 00:03:57.113 17:21:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:57.113 17:21:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:57.113 17:21:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.113 17:21:58 event -- common/autotest_common.sh@10 -- # set +x 00:03:57.113 ************************************ 00:03:57.113 START TEST event_reactor 00:03:57.113 ************************************ 00:03:57.113 17:21:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:57.113 [2024-11-19 17:21:59.037766] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:03:57.113 [2024-11-19 17:21:59.037840] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266450 ] 00:03:57.113 [2024-11-19 17:21:59.118098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.113 [2024-11-19 17:21:59.159736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.053 test_start 00:03:58.053 oneshot 00:03:58.053 tick 100 00:03:58.053 tick 100 00:03:58.053 tick 250 00:03:58.053 tick 100 00:03:58.053 tick 100 00:03:58.053 tick 250 00:03:58.053 tick 100 00:03:58.053 tick 500 00:03:58.053 tick 100 00:03:58.053 tick 100 00:03:58.053 tick 250 00:03:58.053 tick 100 00:03:58.053 tick 100 00:03:58.053 test_end 00:03:58.053 00:03:58.053 real 0m1.185s 00:03:58.053 user 0m1.111s 00:03:58.053 sys 0m0.069s 00:03:58.053 17:22:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.053 17:22:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:58.053 ************************************ 00:03:58.053 END TEST event_reactor 00:03:58.053 ************************************ 00:03:58.053 17:22:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:58.053 17:22:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:58.053 17:22:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.053 17:22:00 event -- common/autotest_common.sh@10 -- # set +x 00:03:58.053 ************************************ 00:03:58.053 START TEST event_reactor_perf 00:03:58.053 ************************************ 00:03:58.053 17:22:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:03:58.311 [2024-11-19 17:22:00.293166] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:58.311 [2024-11-19 17:22:00.293234] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266696 ] 00:03:58.311 [2024-11-19 17:22:00.370691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.311 [2024-11-19 17:22:00.411137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.247 test_start 00:03:59.247 test_end 00:03:59.247 Performance: 496702 events per second 00:03:59.247 00:03:59.247 real 0m1.176s 00:03:59.247 user 0m1.094s 00:03:59.247 sys 0m0.077s 00:03:59.247 17:22:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.247 17:22:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:59.247 ************************************ 00:03:59.247 END TEST event_reactor_perf 00:03:59.247 ************************************ 00:03:59.506 17:22:01 event -- event/event.sh@49 -- # uname -s 00:03:59.506 17:22:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:59.506 17:22:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:59.506 17:22:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.506 17:22:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.506 17:22:01 event -- common/autotest_common.sh@10 -- # set +x 00:03:59.506 ************************************ 00:03:59.506 START TEST event_scheduler 00:03:59.506 ************************************ 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:59.506 * Looking for test storage... 00:03:59.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.506 17:22:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.506 --rc genhtml_branch_coverage=1 00:03:59.506 --rc genhtml_function_coverage=1 00:03:59.506 --rc genhtml_legend=1 00:03:59.506 --rc geninfo_all_blocks=1 00:03:59.506 --rc geninfo_unexecuted_blocks=1 00:03:59.506 00:03:59.506 ' 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.506 --rc genhtml_branch_coverage=1 00:03:59.506 --rc genhtml_function_coverage=1 00:03:59.506 --rc 
genhtml_legend=1 00:03:59.506 --rc geninfo_all_blocks=1 00:03:59.506 --rc geninfo_unexecuted_blocks=1 00:03:59.506 00:03:59.506 ' 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.506 --rc genhtml_branch_coverage=1 00:03:59.506 --rc genhtml_function_coverage=1 00:03:59.506 --rc genhtml_legend=1 00:03:59.506 --rc geninfo_all_blocks=1 00:03:59.506 --rc geninfo_unexecuted_blocks=1 00:03:59.506 00:03:59.506 ' 00:03:59.506 17:22:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.507 --rc genhtml_branch_coverage=1 00:03:59.507 --rc genhtml_function_coverage=1 00:03:59.507 --rc genhtml_legend=1 00:03:59.507 --rc geninfo_all_blocks=1 00:03:59.507 --rc geninfo_unexecuted_blocks=1 00:03:59.507 00:03:59.507 ' 00:03:59.507 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:59.507 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3266987 00:03:59.507 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.507 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:59.507 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3266987 00:03:59.507 17:22:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3266987 ']' 00:03:59.507 17:22:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.507 17:22:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.507 17:22:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.507 17:22:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.507 17:22:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:59.766 [2024-11-19 17:22:01.743093] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:03:59.766 [2024-11-19 17:22:01.743140] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266987 ] 00:03:59.766 [2024-11-19 17:22:01.801023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:59.766 [2024-11-19 17:22:01.844667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.766 [2024-11-19 17:22:01.844775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:59.766 [2024-11-19 17:22:01.844883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:59.766 [2024-11-19 17:22:01.844883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:03:59.766 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:59.766 [2024-11-19 17:22:01.909556] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:59.766 [2024-11-19 17:22:01.909573] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:59.766 [2024-11-19 17:22:01.909582] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:59.766 [2024-11-19 17:22:01.909588] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:59.766 [2024-11-19 17:22:01.909593] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.766 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:59.766 [2024-11-19 17:22:01.984244] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.766 17:22:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.766 17:22:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 ************************************ 00:04:00.025 START TEST scheduler_create_thread 00:04:00.025 ************************************ 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 2 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 3 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 4 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 5 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 6 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 7 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 8 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 9 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.025 10 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:00.025 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.026 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.026 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.026 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:00.026 17:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:00.026 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.026 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.602 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.602 17:22:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:00.602 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.602 17:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.981 17:22:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.981 17:22:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:01.981 17:22:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:01.981 17:22:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.981 17:22:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.918 17:22:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.918 00:04:02.918 real 0m3.101s 00:04:02.918 user 0m0.024s 00:04:02.918 sys 0m0.006s 00:04:02.918 17:22:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.918 17:22:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.918 ************************************ 00:04:02.918 END TEST scheduler_create_thread 00:04:02.918 ************************************ 00:04:03.177 17:22:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:03.177 17:22:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3266987 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3266987 ']' 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3266987 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266987 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266987' 00:04:03.177 killing process with pid 3266987 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3266987 00:04:03.177 17:22:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3266987 00:04:03.436 [2024-11-19 17:22:05.499370] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:03.695 00:04:03.695 real 0m4.158s 00:04:03.695 user 0m6.715s 00:04:03.695 sys 0m0.365s 00:04:03.696 17:22:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.696 17:22:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:03.696 ************************************ 00:04:03.696 END TEST event_scheduler 00:04:03.696 ************************************ 00:04:03.696 17:22:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:03.696 17:22:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:03.696 17:22:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.696 17:22:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.696 17:22:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.696 ************************************ 00:04:03.696 START TEST app_repeat 00:04:03.696 ************************************ 00:04:03.696 17:22:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3267727 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3267727' 00:04:03.696 Process app_repeat pid: 3267727 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:03.696 spdk_app_start Round 0 00:04:03.696 17:22:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3267727 /var/tmp/spdk-nbd.sock 00:04:03.696 17:22:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3267727 ']' 00:04:03.696 17:22:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:03.696 17:22:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.696 17:22:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:03.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:03.696 17:22:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.696 17:22:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:03.696 [2024-11-19 17:22:05.798761] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:04:03.696 [2024-11-19 17:22:05.798819] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3267727 ] 00:04:03.696 [2024-11-19 17:22:05.877435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.955 [2024-11-19 17:22:05.920175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.955 [2024-11-19 17:22:05.920176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.955 17:22:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.955 17:22:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:03.955 17:22:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:04.215 Malloc0 00:04:04.215 17:22:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:04.215 Malloc1 00:04:04.473 17:22:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:04.474 
17:22:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:04.474 /dev/nbd0 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:04.474 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:04.474 17:22:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:04.474 1+0 records in 00:04:04.474 1+0 records out 00:04:04.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228062 s, 18.0 MB/s 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:04.733 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:04.733 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:04.733 17:22:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:04.733 /dev/nbd1 00:04:04.733 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:04.733 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:04.733 17:22:06 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:04.733 1+0 records in 00:04:04.733 1+0 records out 00:04:04.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231575 s, 17.7 MB/s 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:04.733 17:22:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:04.733 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:04.733 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:04.993 17:22:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:04.993 17:22:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.993 17:22:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:04.993 { 00:04:04.993 "nbd_device": "/dev/nbd0", 00:04:04.993 "bdev_name": "Malloc0" 00:04:04.993 }, 00:04:04.993 { 00:04:04.993 "nbd_device": "/dev/nbd1", 00:04:04.993 "bdev_name": "Malloc1" 00:04:04.993 } 00:04:04.993 ]' 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:04.993 { 00:04:04.993 "nbd_device": "/dev/nbd0", 00:04:04.993 "bdev_name": "Malloc0" 00:04:04.993 
}, 00:04:04.993 { 00:04:04.993 "nbd_device": "/dev/nbd1", 00:04:04.993 "bdev_name": "Malloc1" 00:04:04.993 } 00:04:04.993 ]' 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:04.993 /dev/nbd1' 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:04.993 /dev/nbd1' 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:04.993 17:22:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:04.993 256+0 records in 00:04:04.993 256+0 records out 00:04:04.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100365 s, 104 MB/s 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:05.252 256+0 records in 00:04:05.252 256+0 records out 00:04:05.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144734 s, 72.4 MB/s 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:05.252 256+0 records in 00:04:05.252 256+0 records out 00:04:05.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153332 s, 68.4 MB/s 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:05.252 17:22:07 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:05.252 17:22:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:05.512 17:22:07 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.512 17:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:05.771 17:22:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:05.771 17:22:07 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:06.030 17:22:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:06.289 [2024-11-19 17:22:08.324745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:06.289 [2024-11-19 17:22:08.362296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.289 [2024-11-19 17:22:08.362297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.289 [2024-11-19 17:22:08.403391] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:06.289 [2024-11-19 17:22:08.403439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:09.580 17:22:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:09.580 17:22:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:09.580 spdk_app_start Round 1 00:04:09.580 17:22:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3267727 /var/tmp/spdk-nbd.sock 00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3267727 ']' 00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:09.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
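The `waitfornbd` / `waitfornbd_exit` calls traced above poll `/proc/partitions` for the device name (up to 20 attempts) before declaring the nbd device up or gone. A minimal sketch of that polling pattern, with a temporary file standing in for `/proc/partitions` so it can run outside the CI host (the function name and the stand-in table are illustrative, not from nbd_common.sh):

```shell
# Poll a partition table for a whole-word device name, as in:
#   grep -q -w nbd0 /proc/partitions   (retried up to 20 times in the trace)
wait_for_name() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$name" "$table"; then
            return 0    # device entry appeared
        fi
        sleep 0.1
    done
    return 1            # gave up after 20 attempts
}

# Demonstration against a temporary stand-in for /proc/partitions.
tbl=$(mktemp)
printf '%s\n' nbd0 nbd1 > "$tbl"
wait_for_name nbd0 "$tbl" && echo "nbd0 present"
wait_for_name nbd9 "$tbl" || echo "nbd9 absent"
rm -f "$tbl"
```

The real helper additionally reads one 4 KiB block from the device with `dd ... iflag=direct` after the grep succeeds (the `1+0 records in/out` lines in the trace), confirming the device is actually readable, not just listed.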
00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.580 17:22:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:09.580 17:22:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:09.580 Malloc0 00:04:09.580 17:22:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:09.580 Malloc1 00:04:09.840 17:22:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:09.840 17:22:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:09.840 /dev/nbd0 00:04:09.840 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:09.840 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:09.840 17:22:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:09.840 17:22:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:09.840 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:09.840 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:09.840 17:22:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:10.100 1+0 records in 00:04:10.100 1+0 records out 00:04:10.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228556 s, 17.9 MB/s 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:10.100 17:22:12 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:10.100 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:10.100 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.100 17:22:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:10.100 /dev/nbd1 00:04:10.100 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:10.100 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:10.100 17:22:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:10.101 1+0 records in 00:04:10.101 1+0 records out 00:04:10.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200753 s, 20.4 MB/s 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:10.101 17:22:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:10.101 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:10.101 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.101 17:22:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:10.101 17:22:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.101 17:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:10.360 17:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:10.360 { 00:04:10.360 "nbd_device": "/dev/nbd0", 00:04:10.360 "bdev_name": "Malloc0" 00:04:10.360 }, 00:04:10.360 { 00:04:10.360 "nbd_device": "/dev/nbd1", 00:04:10.360 "bdev_name": "Malloc1" 00:04:10.360 } 00:04:10.360 ]' 00:04:10.360 17:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:10.360 { 00:04:10.360 "nbd_device": "/dev/nbd0", 00:04:10.360 "bdev_name": "Malloc0" 00:04:10.360 }, 00:04:10.360 { 00:04:10.360 "nbd_device": "/dev/nbd1", 00:04:10.360 "bdev_name": "Malloc1" 00:04:10.360 } 00:04:10.360 ]' 00:04:10.360 17:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:10.360 17:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:10.360 /dev/nbd1' 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:10.361 /dev/nbd1' 00:04:10.361 
17:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:10.361 256+0 records in 00:04:10.361 256+0 records out 00:04:10.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995902 s, 105 MB/s 00:04:10.361 17:22:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:10.620 256+0 records in 00:04:10.620 256+0 records out 00:04:10.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141559 s, 74.1 MB/s 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:10.620 256+0 records in 00:04:10.620 256+0 records out 00:04:10.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153219 s, 68.4 MB/s 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:10.620 17:22:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:10.879 17:22:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:10.879 17:22:13 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:11.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:11.138 17:22:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:11.397 17:22:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:11.655 [2024-11-19 17:22:13.682735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.655 [2024-11-19 17:22:13.720066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.655 [2024-11-19 17:22:13.720067] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.655 [2024-11-19 17:22:13.761366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:11.655 [2024-11-19 17:22:13.761408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:15.073 17:22:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:15.073 17:22:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:15.073 spdk_app_start Round 2 00:04:15.073 17:22:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3267727 /var/tmp/spdk-nbd.sock 00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3267727 ']' 00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:15.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
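The `nbd_dd_data_verify` sequence traced in each round (nbd_common.sh@76 through @85) writes 256 x 4 KiB of random data to a temp file, copies it onto each `/dev/nbdX` with `oflag=direct`, then byte-compares the first 1 MiB back with `cmp`. A sketch of that write/verify round-trip, with plain temp files standing in for the nbd devices (so `oflag=direct` is dropped; paths and variable names here are stand-ins):

```shell
tmp_file=$(mktemp)   # stand-in for .../test/event/nbdrandtest
nbd0=$(mktemp)       # stand-in for /dev/nbd0
nbd1=$(mktemp)       # stand-in for /dev/nbd1

# write phase: 256 x 4 KiB blocks of random data, copied to each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "$nbd0" "$nbd1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1 MiB of each "device" to the source
for dev in "$nbd0" "$nbd1"; do
    cmp -b -n 1M "$tmp_file" "$dev" && echo "$dev: OK"
done

rm -f "$tmp_file" "$nbd0" "$nbd1"
```

A nonzero exit from `cmp` would fail the test; in the trace both comparisons pass silently and the script proceeds to `rm` the rand file and stop the disks.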
00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.073 17:22:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:15.073 17:22:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:15.073 Malloc0 00:04:15.073 17:22:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:15.073 Malloc1 00:04:15.073 17:22:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.073 17:22:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:15.331 /dev/nbd0 00:04:15.331 17:22:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:15.331 17:22:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.331 1+0 records in 00:04:15.331 1+0 records out 00:04:15.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233245 s, 17.6 MB/s 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:15.331 17:22:17 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:15.331 17:22:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:15.331 17:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.331 17:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.331 17:22:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:15.590 /dev/nbd1 00:04:15.590 17:22:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:15.590 17:22:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.590 1+0 records in 00:04:15.590 1+0 records out 00:04:15.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202633 s, 20.2 MB/s 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:15.590 17:22:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:15.590 17:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.590 17:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.590 17:22:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.590 17:22:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.590 17:22:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:15.848 { 00:04:15.848 "nbd_device": "/dev/nbd0", 00:04:15.848 "bdev_name": "Malloc0" 00:04:15.848 }, 00:04:15.848 { 00:04:15.848 "nbd_device": "/dev/nbd1", 00:04:15.848 "bdev_name": "Malloc1" 00:04:15.848 } 00:04:15.848 ]' 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:15.848 { 00:04:15.848 "nbd_device": "/dev/nbd0", 00:04:15.848 "bdev_name": "Malloc0" 00:04:15.848 }, 00:04:15.848 { 00:04:15.848 "nbd_device": "/dev/nbd1", 00:04:15.848 "bdev_name": "Malloc1" 00:04:15.848 } 00:04:15.848 ]' 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:15.848 /dev/nbd1' 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:15.848 /dev/nbd1' 00:04:15.848 
17:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.848 17:22:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:15.849 256+0 records in 00:04:15.849 256+0 records out 00:04:15.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106643 s, 98.3 MB/s 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.849 256+0 records in 00:04:15.849 256+0 records out 00:04:15.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145709 s, 72.0 MB/s 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.849 256+0 records in 00:04:15.849 256+0 records out 00:04:15.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157614 s, 66.5 MB/s 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.849 17:22:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.107 17:22:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:16.108 17:22:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:16.366 17:22:18 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.366 17:22:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:16.625 17:22:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:16.625 17:22:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:16.884 17:22:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:16.884 [2024-11-19 17:22:19.050013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:16.884 [2024-11-19 17:22:19.087160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.884 [2024-11-19 17:22:19.087162] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.152 [2024-11-19 17:22:19.128526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:17.152 [2024-11-19 17:22:19.128568] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:20.444 17:22:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3267727 /var/tmp/spdk-nbd.sock 00:04:20.444 17:22:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3267727 ']' 00:04:20.444 17:22:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:20.444 17:22:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.444 17:22:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:20.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
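The `nbd_dd_data_verify` steps traced earlier in this section write a random pattern to each NBD device with `dd` and then byte-compare it back with `cmp`. A minimal standalone sketch of that round-trip follows; plain files under `mktemp` stand in for `/dev/nbd0` and `/dev/nbd1` so it runs without an NBD device (and without `oflag=direct`, which regular files on some filesystems reject).

```shell
# Hedged sketch of the write-then-verify pattern from the trace above.
set -e
tmp_dir=$(mktemp -d)
pattern="$tmp_dir/nbdrandtest"

# 256 x 4096-byte blocks = 1 MiB of random data, matching the trace.
dd if=/dev/urandom of="$pattern" bs=4096 count=256 status=none

for dev in "$tmp_dir/nbd0" "$tmp_dir/nbd1"; do
    # Write the pattern out to the stand-in "device" ...
    dd if="$pattern" of="$dev" bs=4096 count=256 status=none
    # ... then read it back and byte-compare; cmp exits non-zero
    # (tripping set -e) on any mismatch.
    cmp -b -n 1M "$pattern" "$dev"
done
echo "verify OK"
```

The real helper additionally removes the pattern file and stops the NBD disks over the RPC socket afterwards, as the trace shows.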
00:04:20.444 17:22:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.444 17:22:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:20.444 17:22:22 event.app_repeat -- event/event.sh@39 -- # killprocess 3267727 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3267727 ']' 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3267727 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3267727 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3267727' 00:04:20.444 killing process with pid 3267727 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3267727 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3267727 00:04:20.444 spdk_app_start is called in Round 0. 00:04:20.444 Shutdown signal received, stop current app iteration 00:04:20.444 Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 reinitialization... 00:04:20.444 spdk_app_start is called in Round 1. 00:04:20.444 Shutdown signal received, stop current app iteration 00:04:20.444 Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 reinitialization... 00:04:20.444 spdk_app_start is called in Round 2. 
00:04:20.444 Shutdown signal received, stop current app iteration 00:04:20.444 Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 reinitialization... 00:04:20.444 spdk_app_start is called in Round 3. 00:04:20.444 Shutdown signal received, stop current app iteration 00:04:20.444 17:22:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:20.444 17:22:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:20.444 00:04:20.444 real 0m16.526s 00:04:20.444 user 0m36.415s 00:04:20.444 sys 0m2.585s 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.444 17:22:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.444 ************************************ 00:04:20.444 END TEST app_repeat 00:04:20.444 ************************************ 00:04:20.444 17:22:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:20.444 17:22:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:20.444 17:22:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.444 17:22:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.444 17:22:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.444 ************************************ 00:04:20.444 START TEST cpu_locks 00:04:20.444 ************************************ 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:20.444 * Looking for test storage... 
00:04:20.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.444 17:22:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.444 --rc genhtml_branch_coverage=1 00:04:20.444 --rc genhtml_function_coverage=1 00:04:20.444 --rc genhtml_legend=1 00:04:20.444 --rc geninfo_all_blocks=1 00:04:20.444 --rc geninfo_unexecuted_blocks=1 00:04:20.444 00:04:20.444 ' 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.444 --rc genhtml_branch_coverage=1 00:04:20.444 --rc genhtml_function_coverage=1 00:04:20.444 --rc genhtml_legend=1 00:04:20.444 --rc geninfo_all_blocks=1 00:04:20.444 --rc geninfo_unexecuted_blocks=1 
00:04:20.444 00:04:20.444 ' 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.444 --rc genhtml_branch_coverage=1 00:04:20.444 --rc genhtml_function_coverage=1 00:04:20.444 --rc genhtml_legend=1 00:04:20.444 --rc geninfo_all_blocks=1 00:04:20.444 --rc geninfo_unexecuted_blocks=1 00:04:20.444 00:04:20.444 ' 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.444 --rc genhtml_branch_coverage=1 00:04:20.444 --rc genhtml_function_coverage=1 00:04:20.444 --rc genhtml_legend=1 00:04:20.444 --rc geninfo_all_blocks=1 00:04:20.444 --rc geninfo_unexecuted_blocks=1 00:04:20.444 00:04:20.444 ' 00:04:20.444 17:22:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:20.444 17:22:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:20.444 17:22:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:20.444 17:22:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.444 17:22:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.445 17:22:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.445 ************************************ 00:04:20.445 START TEST default_locks 00:04:20.445 ************************************ 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3270831 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3270831 00:04:20.445 17:22:22 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3270831 ']' 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.445 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.445 [2024-11-19 17:22:22.627290] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
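The `lt 1.15 2` / `cmp_versions` trace above (used to gate on the installed lcov version) splits each version string on dots and compares the fields numerically, field by field. A hedged sketch of that comparison is below; `ver_lt` is a stand-in name for the `scripts/common.sh` helpers, not their actual implementation.

```shell
# Hedged sketch of numeric, field-wise version comparison as traced above.
# Returns 0 (true) when $1 is strictly older than $2.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)   # split "1.15" -> (1 15) on dots
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        # 10# forces base-10 so fields like "09" do not parse as octal;
        # missing fields default to 0 (so 1.15 compares like 1.15.0).
        ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
        ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

The numeric comparison matters: a naive string compare would wrongly order `1.9` after `1.10`.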
00:04:20.445 [2024-11-19 17:22:22.627336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270831 ] 00:04:20.704 [2024-11-19 17:22:22.702609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.704 [2024-11-19 17:22:22.743545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.963 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.963 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:20.963 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3270831 00:04:20.964 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3270831 00:04:20.964 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:21.532 lslocks: write error 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3270831 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3270831 ']' 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3270831 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3270831 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3270831' 00:04:21.532 killing process with pid 3270831 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3270831 00:04:21.532 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3270831 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3270831 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3270831 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3270831 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3270831 ']' 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
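The `killprocess` / `NOT waitforlisten` sequence traced around here relies on `kill -0`, which probes whether a PID is still alive without delivering a signal; once the target is killed and reaped, follow-up operations on the PID fail (the "No such process" line in the log). A small self-contained sketch of that liveness pattern:

```shell
# Hedged sketch of the kill -0 liveness probe used by killprocess above.
sleep 30 &
pid=$!

# Signal 0 performs only the permission/existence check, sending nothing.
kill -0 "$pid" && echo "pid $pid is running"

kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true   # reap the child so the PID entry goes away

kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```

Reaping with `wait` is the step that makes the second probe fail deterministically; an unreaped zombie would still answer `kill -0`.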
00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3270831) - No such process 00:04:21.792 ERROR: process (pid: 3270831) is no longer running 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:21.792 00:04:21.792 real 0m1.358s 00:04:21.792 user 0m1.314s 00:04:21.792 sys 0m0.593s 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.792 17:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.792 ************************************ 00:04:21.792 END TEST default_locks 00:04:21.792 ************************************ 00:04:21.792 17:22:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:21.792 17:22:23 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.792 17:22:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.792 17:22:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.792 ************************************ 00:04:21.792 START TEST default_locks_via_rpc 00:04:21.792 ************************************ 00:04:21.792 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:21.792 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3271118 00:04:21.792 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3271118 00:04:21.792 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.792 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3271118 ']' 00:04:21.793 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.793 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.793 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.793 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.793 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.052 [2024-11-19 17:22:24.055766] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:04:22.052 [2024-11-19 17:22:24.055818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271118 ] 00:04:22.052 [2024-11-19 17:22:24.131371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.052 [2024-11-19 17:22:24.173749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.312 17:22:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3271118 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3271118 00:04:22.312 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3271118 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3271118 ']' 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3271118 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3271118 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3271118' 00:04:22.571 killing process with pid 3271118 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3271118 00:04:22.571 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3271118 00:04:22.830 00:04:22.830 real 0m0.965s 00:04:22.830 user 0m0.920s 00:04:22.830 sys 0m0.447s 00:04:22.830 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.830 17:22:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.830 ************************************ 00:04:22.830 END TEST default_locks_via_rpc 00:04:22.830 ************************************ 00:04:22.830 17:22:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:22.830 17:22:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.830 17:22:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.830 17:22:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:22.830 ************************************ 00:04:22.830 START TEST non_locking_app_on_locked_coremask 00:04:22.831 ************************************ 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3271247 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3271247 /var/tmp/spdk.sock 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3271247 ']' 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:22.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.831 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.089 [2024-11-19 17:22:25.090587] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:23.089 [2024-11-19 17:22:25.090629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271247 ] 00:04:23.089 [2024-11-19 17:22:25.166285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.089 [2024-11-19 17:22:25.210137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3271413 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3271413 /var/tmp/spdk2.sock 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3271413 ']' 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.348 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:23.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:23.349 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.349 17:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.349 [2024-11-19 17:22:25.475558] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:23.349 [2024-11-19 17:22:25.475607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271413 ] 00:04:23.349 [2024-11-19 17:22:25.565465] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
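The `locks_exist` helper traced in these tests asserts that the target process holds a CPU-core lock file (`lslocks -p PID` piped through `grep spdk_cpu_lock`). A hedged sketch of the underlying idea follows, using `flock(1)` as a stand-in for SPDK's own core-lock acquisition; the lock file path is hypothetical.

```shell
# Hedged sketch: hold a file lock in the background, then show that a
# non-blocking attempt fails while it is held and succeeds once released.
lockfile=$(mktemp /tmp/spdk_cpu_lock.XXXXXX)

flock -x "$lockfile" sleep 5 &   # background holder takes an exclusive lock
holder=$!
sleep 0.2                        # give it a moment to acquire

# A non-blocking (-n) attempt must fail while the holder is alive ...
flock -n "$lockfile" true || echo "lock is held"

kill "$holder" 2>/dev/null
wait "$holder" 2>/dev/null || true

# ... and succeed once the holder (and its file descriptor) is gone.
flock -n "$lockfile" true && echo "lock released"
rm -f "$lockfile"
```

This is also why the later `--disable-cpumask-locks` run (`spdk_tgt ... --disable-cpumask-locks`) can share core 0 with an already-running target: with the lock never taken, the `lslocks | grep` check finds nothing to conflict on.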
00:04:23.349 [2024-11-19 17:22:25.565491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.607 [2024-11-19 17:22:25.654531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.175 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.175 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:24.175 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3271247 00:04:24.176 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:24.176 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3271247 00:04:24.744 lslocks: write error 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3271247 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3271247 ']' 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3271247 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3271247 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3271247' 00:04:24.744 killing process with pid 3271247 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3271247 00:04:24.744 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3271247 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3271413 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3271413 ']' 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3271413 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3271413 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3271413' 00:04:25.681 killing process with pid 3271413 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3271413 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3271413 00:04:25.681 00:04:25.681 real 0m2.858s 00:04:25.681 user 0m3.022s 00:04:25.681 sys 0m0.927s 00:04:25.681 17:22:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.681 17:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.681 ************************************ 00:04:25.681 END TEST non_locking_app_on_locked_coremask 00:04:25.681 ************************************ 00:04:25.941 17:22:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:25.941 17:22:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.941 17:22:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.941 17:22:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 ************************************ 00:04:25.941 START TEST locking_app_on_unlocked_coremask 00:04:25.941 ************************************ 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3271766 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3271766 /var/tmp/spdk.sock 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3271766 ']' 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.941 17:22:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.941 17:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 [2024-11-19 17:22:28.018853] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:25.941 [2024-11-19 17:22:28.018896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271766 ] 00:04:25.941 [2024-11-19 17:22:28.091014] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:25.941 [2024-11-19 17:22:28.091040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.941 [2024-11-19 17:22:28.135006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3271969 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3271969 /var/tmp/spdk2.sock 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3271969 ']' 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:26.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.200 17:22:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:26.200 [2024-11-19 17:22:28.405317] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:04:26.201 [2024-11-19 17:22:28.405369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271969 ] 00:04:26.460 [2024-11-19 17:22:28.498043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.460 [2024-11-19 17:22:28.586792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.030 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.030 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:27.030 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3271969 00:04:27.030 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3271969 00:04:27.030 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:27.599 lslocks: write error 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3271766 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3271766 ']' 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3271766 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3271766 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3271766' 00:04:27.599 killing process with pid 3271766 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3271766 00:04:27.599 17:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3271766 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3271969 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3271969 ']' 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3271969 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3271969 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3271969' 00:04:28.168 killing process with pid 3271969 00:04:28.168 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3271969 00:04:28.168 17:22:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3271969 00:04:28.736 00:04:28.736 real 0m2.701s 00:04:28.736 user 0m2.869s 00:04:28.736 sys 0m0.868s 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.736 ************************************ 00:04:28.736 END TEST locking_app_on_unlocked_coremask 00:04:28.736 ************************************ 00:04:28.736 17:22:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:28.736 17:22:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.736 17:22:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.736 17:22:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.736 ************************************ 00:04:28.736 START TEST locking_app_on_locked_coremask 00:04:28.736 ************************************ 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3272276 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3272276 /var/tmp/spdk.sock 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3272276 ']' 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.736 17:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.736 [2024-11-19 17:22:30.791158] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:28.736 [2024-11-19 17:22:30.791201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272276 ] 00:04:28.736 [2024-11-19 17:22:30.867706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.736 [2024-11-19 17:22:30.910615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3272465 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3272465 /var/tmp/spdk2.sock 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3272465 /var/tmp/spdk2.sock 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3272465 /var/tmp/spdk2.sock 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3272465 ']' 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:28.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.996 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.996 [2024-11-19 17:22:31.183775] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:28.996 [2024-11-19 17:22:31.183827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272465 ] 00:04:29.255 [2024-11-19 17:22:31.276391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3272276 has claimed it. 00:04:29.255 [2024-11-19 17:22:31.276428] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:29.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3272465) - No such process 00:04:29.822 ERROR: process (pid: 3272465) is no longer running 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3272276 00:04:29.822 17:22:31 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3272276 00:04:29.822 17:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.390 lslocks: write error 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3272276 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3272276 ']' 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3272276 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3272276 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3272276' 00:04:30.390 killing process with pid 3272276 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3272276 00:04:30.390 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3272276 00:04:30.650 00:04:30.650 real 0m1.943s 00:04:30.650 user 0m2.076s 00:04:30.650 sys 0m0.650s 00:04:30.650 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.650 17:22:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.650 ************************************ 00:04:30.650 END TEST locking_app_on_locked_coremask 00:04:30.650 ************************************ 00:04:30.650 17:22:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:30.650 17:22:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.650 17:22:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.650 17:22:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.650 ************************************ 00:04:30.650 START TEST locking_overlapped_coremask 00:04:30.650 ************************************ 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3272733 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3272733 /var/tmp/spdk.sock 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3272733 ']' 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.650 17:22:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.650 [2024-11-19 17:22:32.801337] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:30.650 [2024-11-19 17:22:32.801378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272733 ] 00:04:30.909 [2024-11-19 17:22:32.877011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:30.909 [2024-11-19 17:22:32.921889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.909 [2024-11-19 17:22:32.921995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.909 [2024-11-19 17:22:32.921996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3272738 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3272738 /var/tmp/spdk2.sock 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3272738 /var/tmp/spdk2.sock 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3272738 /var/tmp/spdk2.sock 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3272738 ']' 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.169 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.169 [2024-11-19 17:22:33.189137] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:04:31.169 [2024-11-19 17:22:33.189187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272738 ] 00:04:31.169 [2024-11-19 17:22:33.282622] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3272733 has claimed it. 00:04:31.169 [2024-11-19 17:22:33.282657] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:31.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3272738) - No such process 00:04:31.738 ERROR: process (pid: 3272738) is no longer running 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3272733 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3272733 ']' 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3272733 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3272733 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3272733' 00:04:31.738 killing process with pid 3272733 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3272733 00:04:31.738 17:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3272733 00:04:31.996 00:04:31.996 real 0m1.435s 00:04:31.996 user 0m3.943s 00:04:31.996 sys 0m0.402s 00:04:31.996 17:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.996 17:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.996 
************************************ 00:04:31.996 END TEST locking_overlapped_coremask 00:04:31.996 ************************************ 00:04:31.996 17:22:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:31.996 17:22:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.256 17:22:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.256 17:22:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.256 ************************************ 00:04:32.256 START TEST locking_overlapped_coremask_via_rpc 00:04:32.256 ************************************ 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3272995 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3272995 /var/tmp/spdk.sock 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3272995 ']' 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:32.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.256 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.256 [2024-11-19 17:22:34.305051] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:32.257 [2024-11-19 17:22:34.305092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272995 ] 00:04:32.257 [2024-11-19 17:22:34.366211] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:32.257 [2024-11-19 17:22:34.366237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:32.257 [2024-11-19 17:22:34.413967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.257 [2024-11-19 17:22:34.414003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.257 [2024-11-19 17:22:34.414004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3273011 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3273011 /var/tmp/spdk2.sock 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3273011 ']' 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.516 17:22:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.516 [2024-11-19 17:22:34.676835] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:32.516 [2024-11-19 17:22:34.676882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273011 ] 00:04:32.775 [2024-11-19 17:22:34.769492] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:32.775 [2024-11-19 17:22:34.769520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:32.775 [2024-11-19 17:22:34.857190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.775 [2024-11-19 17:22:34.857303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.775 [2024-11-19 17:22:34.857304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:33.343 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.343 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.343 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.344 17:22:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.344 [2024-11-19 17:22:35.532022] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3272995 has claimed it. 00:04:33.344 request: 00:04:33.344 { 00:04:33.344 "method": "framework_enable_cpumask_locks", 00:04:33.344 "req_id": 1 00:04:33.344 } 00:04:33.344 Got JSON-RPC error response 00:04:33.344 response: 00:04:33.344 { 00:04:33.344 "code": -32603, 00:04:33.344 "message": "Failed to claim CPU core: 2" 00:04:33.344 } 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3272995 /var/tmp/spdk.sock 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3272995 ']' 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.344 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3273011 /var/tmp/spdk2.sock 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3273011 ']' 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:33.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.603 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:33.862 00:04:33.862 real 0m1.687s 00:04:33.862 user 0m0.856s 00:04:33.862 sys 0m0.135s 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.862 17:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.862 ************************************ 00:04:33.862 END TEST locking_overlapped_coremask_via_rpc 00:04:33.862 ************************************ 00:04:33.862 17:22:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:33.862 17:22:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3272995 ]] 00:04:33.863 17:22:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3272995 00:04:33.863 17:22:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3272995 ']' 00:04:33.863 17:22:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3272995 00:04:33.863 17:22:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:33.863 17:22:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.863 17:22:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3272995 00:04:33.863 17:22:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.863 17:22:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.863 17:22:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3272995' 00:04:33.863 killing process with pid 3272995 00:04:33.863 17:22:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3272995 00:04:33.863 17:22:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3272995 00:04:34.122 17:22:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3273011 ]] 00:04:34.122 17:22:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3273011 00:04:34.122 17:22:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3273011 ']' 00:04:34.122 17:22:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3273011 00:04:34.122 17:22:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:34.122 17:22:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.122 17:22:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3273011 00:04:34.380 17:22:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:34.380 17:22:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:34.380 17:22:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3273011' 00:04:34.380 killing process with pid 3273011 00:04:34.380 17:22:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3273011 00:04:34.381 17:22:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3273011 00:04:34.640 17:22:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:34.640 17:22:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:34.640 17:22:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3272995 ]] 00:04:34.640 17:22:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3272995 00:04:34.640 17:22:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3272995 ']' 00:04:34.640 17:22:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3272995 00:04:34.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3272995) - No such process 00:04:34.640 17:22:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3272995 is not found' 00:04:34.640 Process with pid 3272995 is not found 00:04:34.640 17:22:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3273011 ]] 00:04:34.640 17:22:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3273011 00:04:34.640 17:22:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3273011 ']' 00:04:34.640 17:22:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3273011 00:04:34.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3273011) - No such process 00:04:34.640 17:22:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3273011 is not found' 00:04:34.640 Process with pid 3273011 is not found 00:04:34.640 17:22:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:34.640 00:04:34.640 real 0m14.346s 00:04:34.640 user 0m24.680s 00:04:34.640 sys 0m4.982s 00:04:34.640 17:22:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.640 
17:22:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.640 ************************************ 00:04:34.640 END TEST cpu_locks 00:04:34.640 ************************************ 00:04:34.640 00:04:34.640 real 0m39.193s 00:04:34.640 user 1m14.393s 00:04:34.640 sys 0m8.537s 00:04:34.640 17:22:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.640 17:22:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.640 ************************************ 00:04:34.640 END TEST event 00:04:34.640 ************************************ 00:04:34.640 17:22:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:34.640 17:22:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.640 17:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.640 17:22:36 -- common/autotest_common.sh@10 -- # set +x 00:04:34.640 ************************************ 00:04:34.640 START TEST thread 00:04:34.640 ************************************ 00:04:34.640 17:22:36 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:34.900 * Looking for test storage... 
00:04:34.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.900 17:22:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.900 17:22:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.900 17:22:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.900 17:22:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.900 17:22:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.900 17:22:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.900 17:22:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.900 17:22:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.900 17:22:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.900 17:22:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.900 17:22:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.900 17:22:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:34.900 17:22:36 thread -- scripts/common.sh@345 -- # : 1 00:04:34.900 17:22:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.900 17:22:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.900 17:22:36 thread -- scripts/common.sh@365 -- # decimal 1 00:04:34.900 17:22:36 thread -- scripts/common.sh@353 -- # local d=1 00:04:34.900 17:22:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.900 17:22:36 thread -- scripts/common.sh@355 -- # echo 1 00:04:34.900 17:22:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.900 17:22:36 thread -- scripts/common.sh@366 -- # decimal 2 00:04:34.900 17:22:36 thread -- scripts/common.sh@353 -- # local d=2 00:04:34.900 17:22:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.900 17:22:36 thread -- scripts/common.sh@355 -- # echo 2 00:04:34.900 17:22:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.900 17:22:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.900 17:22:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.900 17:22:36 thread -- scripts/common.sh@368 -- # return 0 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.900 --rc genhtml_branch_coverage=1 00:04:34.900 --rc genhtml_function_coverage=1 00:04:34.900 --rc genhtml_legend=1 00:04:34.900 --rc geninfo_all_blocks=1 00:04:34.900 --rc geninfo_unexecuted_blocks=1 00:04:34.900 00:04:34.900 ' 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.900 --rc genhtml_branch_coverage=1 00:04:34.900 --rc genhtml_function_coverage=1 00:04:34.900 --rc genhtml_legend=1 00:04:34.900 --rc geninfo_all_blocks=1 00:04:34.900 --rc geninfo_unexecuted_blocks=1 00:04:34.900 00:04:34.900 ' 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.900 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.900 --rc genhtml_branch_coverage=1 00:04:34.900 --rc genhtml_function_coverage=1 00:04:34.900 --rc genhtml_legend=1 00:04:34.900 --rc geninfo_all_blocks=1 00:04:34.900 --rc geninfo_unexecuted_blocks=1 00:04:34.900 00:04:34.900 ' 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.900 --rc genhtml_branch_coverage=1 00:04:34.900 --rc genhtml_function_coverage=1 00:04:34.900 --rc genhtml_legend=1 00:04:34.900 --rc geninfo_all_blocks=1 00:04:34.900 --rc geninfo_unexecuted_blocks=1 00:04:34.900 00:04:34.900 ' 00:04:34.900 17:22:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.900 17:22:36 thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.900 ************************************ 00:04:34.900 START TEST thread_poller_perf 00:04:34.900 ************************************ 00:04:34.900 17:22:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:34.900 [2024-11-19 17:22:37.040603] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:04:34.900 [2024-11-19 17:22:37.040661] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273571 ] 00:04:34.900 [2024-11-19 17:22:37.116595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.159 [2024-11-19 17:22:37.157289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.159 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:36.096 [2024-11-19T16:22:38.319Z] ====================================== 00:04:36.096 [2024-11-19T16:22:38.319Z] busy:2306823696 (cyc) 00:04:36.096 [2024-11-19T16:22:38.319Z] total_run_count: 405000 00:04:36.096 [2024-11-19T16:22:38.319Z] tsc_hz: 2300000000 (cyc) 00:04:36.096 [2024-11-19T16:22:38.319Z] ====================================== 00:04:36.096 [2024-11-19T16:22:38.319Z] poller_cost: 5695 (cyc), 2476 (nsec) 00:04:36.096 00:04:36.096 real 0m1.180s 00:04:36.096 user 0m1.107s 00:04:36.096 sys 0m0.069s 00:04:36.096 17:22:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.096 17:22:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.096 ************************************ 00:04:36.096 END TEST thread_poller_perf 00:04:36.096 ************************************ 00:04:36.096 17:22:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:36.096 17:22:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:36.096 17:22:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.096 17:22:38 thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.096 ************************************ 00:04:36.096 START TEST thread_poller_perf 00:04:36.096 
************************************ 00:04:36.096 17:22:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:36.096 [2024-11-19 17:22:38.290931] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:36.096 [2024-11-19 17:22:38.291009] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273821 ] 00:04:36.355 [2024-11-19 17:22:38.368107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.355 [2024-11-19 17:22:38.409641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.355 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:37.292 [2024-11-19T16:22:39.515Z] ====================================== 00:04:37.292 [2024-11-19T16:22:39.515Z] busy:2301727482 (cyc) 00:04:37.292 [2024-11-19T16:22:39.515Z] total_run_count: 5344000 00:04:37.292 [2024-11-19T16:22:39.515Z] tsc_hz: 2300000000 (cyc) 00:04:37.292 [2024-11-19T16:22:39.515Z] ====================================== 00:04:37.292 [2024-11-19T16:22:39.515Z] poller_cost: 430 (cyc), 186 (nsec) 00:04:37.292 00:04:37.292 real 0m1.180s 00:04:37.292 user 0m1.103s 00:04:37.292 sys 0m0.073s 00:04:37.292 17:22:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.292 17:22:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.292 ************************************ 00:04:37.292 END TEST thread_poller_perf 00:04:37.292 ************************************ 00:04:37.292 17:22:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:37.292 00:04:37.292 real 0m2.673s 00:04:37.292 user 0m2.378s 00:04:37.292 sys 0m0.310s 00:04:37.292 17:22:39 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.292 17:22:39 thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.292 ************************************ 00:04:37.292 END TEST thread 00:04:37.292 ************************************ 00:04:37.552 17:22:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:37.552 17:22:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:37.552 17:22:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.552 17:22:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.552 17:22:39 -- common/autotest_common.sh@10 -- # set +x 00:04:37.552 ************************************ 00:04:37.552 START TEST app_cmdline 00:04:37.552 ************************************ 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:37.552 * Looking for test storage... 00:04:37.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.552 17:22:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.552 --rc genhtml_branch_coverage=1 
00:04:37.552 --rc genhtml_function_coverage=1 00:04:37.552 --rc genhtml_legend=1 00:04:37.552 --rc geninfo_all_blocks=1 00:04:37.552 --rc geninfo_unexecuted_blocks=1 00:04:37.552 00:04:37.552 ' 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.552 --rc genhtml_branch_coverage=1 00:04:37.552 --rc genhtml_function_coverage=1 00:04:37.552 --rc genhtml_legend=1 00:04:37.552 --rc geninfo_all_blocks=1 00:04:37.552 --rc geninfo_unexecuted_blocks=1 00:04:37.552 00:04:37.552 ' 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.552 --rc genhtml_branch_coverage=1 00:04:37.552 --rc genhtml_function_coverage=1 00:04:37.552 --rc genhtml_legend=1 00:04:37.552 --rc geninfo_all_blocks=1 00:04:37.552 --rc geninfo_unexecuted_blocks=1 00:04:37.552 00:04:37.552 ' 00:04:37.552 17:22:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.552 --rc genhtml_branch_coverage=1 00:04:37.552 --rc genhtml_function_coverage=1 00:04:37.552 --rc genhtml_legend=1 00:04:37.552 --rc geninfo_all_blocks=1 00:04:37.552 --rc geninfo_unexecuted_blocks=1 00:04:37.552 00:04:37.552 ' 00:04:37.552 17:22:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:37.552 17:22:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3274120 00:04:37.552 17:22:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:37.553 17:22:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3274120 00:04:37.553 17:22:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3274120 ']' 00:04:37.553 17:22:39 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:37.553 17:22:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.553 17:22:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.553 17:22:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.553 17:22:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:37.812 [2024-11-19 17:22:39.779303] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:04:37.812 [2024-11-19 17:22:39.779349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274120 ] 00:04:37.812 [2024-11-19 17:22:39.855380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.812 [2024-11-19 17:22:39.896308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.071 17:22:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.071 17:22:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:38.071 17:22:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:38.071 { 00:04:38.071 "version": "SPDK v25.01-pre git sha1 ea8382642", 00:04:38.071 "fields": { 00:04:38.071 "major": 25, 00:04:38.071 "minor": 1, 00:04:38.071 "patch": 0, 00:04:38.071 "suffix": "-pre", 00:04:38.071 "commit": "ea8382642" 00:04:38.071 } 00:04:38.071 } 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:38.331 17:22:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:38.331 17:22:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:38.331 request: 00:04:38.331 { 00:04:38.331 "method": "env_dpdk_get_mem_stats", 00:04:38.331 "req_id": 1 00:04:38.331 } 00:04:38.331 Got JSON-RPC error response 00:04:38.331 response: 00:04:38.331 { 00:04:38.331 "code": -32601, 00:04:38.331 "message": "Method not found" 00:04:38.331 } 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.590 17:22:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3274120 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3274120 ']' 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3274120 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3274120 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3274120' 00:04:38.590 killing process with pid 3274120 00:04:38.590 
17:22:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 3274120 00:04:38.590 17:22:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 3274120 00:04:38.849 00:04:38.849 real 0m1.356s 00:04:38.849 user 0m1.564s 00:04:38.849 sys 0m0.467s 00:04:38.849 17:22:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.849 17:22:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:38.849 ************************************ 00:04:38.849 END TEST app_cmdline 00:04:38.849 ************************************ 00:04:38.849 17:22:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:38.849 17:22:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.849 17:22:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.849 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:04:38.849 ************************************ 00:04:38.849 START TEST version 00:04:38.849 ************************************ 00:04:38.849 17:22:40 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:38.849 * Looking for test storage... 
00:04:39.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:39.109 17:22:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.109 17:22:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.109 17:22:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.109 17:22:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.109 17:22:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.109 17:22:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.109 17:22:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.109 17:22:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.109 17:22:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.109 17:22:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.109 17:22:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.110 17:22:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.110 17:22:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.110 17:22:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.110 17:22:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.110 17:22:41 version -- scripts/common.sh@344 -- # case "$op" in 00:04:39.110 17:22:41 version -- scripts/common.sh@345 -- # : 1 00:04:39.110 17:22:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.110 17:22:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.110 17:22:41 version -- scripts/common.sh@365 -- # decimal 1 00:04:39.110 17:22:41 version -- scripts/common.sh@353 -- # local d=1 00:04:39.110 17:22:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.110 17:22:41 version -- scripts/common.sh@355 -- # echo 1 00:04:39.110 17:22:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.110 17:22:41 version -- scripts/common.sh@366 -- # decimal 2 00:04:39.110 17:22:41 version -- scripts/common.sh@353 -- # local d=2 00:04:39.110 17:22:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.110 17:22:41 version -- scripts/common.sh@355 -- # echo 2 00:04:39.110 17:22:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.110 17:22:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.110 17:22:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.110 17:22:41 version -- scripts/common.sh@368 -- # return 0 00:04:39.110 17:22:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.110 17:22:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.110 --rc genhtml_branch_coverage=1 00:04:39.110 --rc genhtml_function_coverage=1 00:04:39.110 --rc genhtml_legend=1 00:04:39.110 --rc geninfo_all_blocks=1 00:04:39.110 --rc geninfo_unexecuted_blocks=1 00:04:39.110 00:04:39.110 ' 00:04:39.110 17:22:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.110 --rc genhtml_branch_coverage=1 00:04:39.110 --rc genhtml_function_coverage=1 00:04:39.110 --rc genhtml_legend=1 00:04:39.110 --rc geninfo_all_blocks=1 00:04:39.110 --rc geninfo_unexecuted_blocks=1 00:04:39.110 00:04:39.110 ' 00:04:39.110 17:22:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.110 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.110 --rc genhtml_branch_coverage=1 00:04:39.110 --rc genhtml_function_coverage=1 00:04:39.110 --rc genhtml_legend=1 00:04:39.110 --rc geninfo_all_blocks=1 00:04:39.110 --rc geninfo_unexecuted_blocks=1 00:04:39.110 00:04:39.110 ' 00:04:39.110 17:22:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.110 --rc genhtml_branch_coverage=1 00:04:39.110 --rc genhtml_function_coverage=1 00:04:39.110 --rc genhtml_legend=1 00:04:39.110 --rc geninfo_all_blocks=1 00:04:39.110 --rc geninfo_unexecuted_blocks=1 00:04:39.110 00:04:39.110 ' 00:04:39.110 17:22:41 version -- app/version.sh@17 -- # get_header_version major 00:04:39.110 17:22:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # cut -f2 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:39.110 17:22:41 version -- app/version.sh@17 -- # major=25 00:04:39.110 17:22:41 version -- app/version.sh@18 -- # get_header_version minor 00:04:39.110 17:22:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # cut -f2 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:39.110 17:22:41 version -- app/version.sh@18 -- # minor=1 00:04:39.110 17:22:41 version -- app/version.sh@19 -- # get_header_version patch 00:04:39.110 17:22:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # cut -f2 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:39.110 
17:22:41 version -- app/version.sh@19 -- # patch=0 00:04:39.110 17:22:41 version -- app/version.sh@20 -- # get_header_version suffix 00:04:39.110 17:22:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # cut -f2 00:04:39.110 17:22:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:39.110 17:22:41 version -- app/version.sh@20 -- # suffix=-pre 00:04:39.110 17:22:41 version -- app/version.sh@22 -- # version=25.1 00:04:39.110 17:22:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:39.110 17:22:41 version -- app/version.sh@28 -- # version=25.1rc0 00:04:39.110 17:22:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:39.110 17:22:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:39.110 17:22:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:39.110 17:22:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:39.110 00:04:39.110 real 0m0.243s 00:04:39.110 user 0m0.143s 00:04:39.110 sys 0m0.144s 00:04:39.110 17:22:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.110 17:22:41 version -- common/autotest_common.sh@10 -- # set +x 00:04:39.110 ************************************ 00:04:39.110 END TEST version 00:04:39.110 ************************************ 00:04:39.110 17:22:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:39.110 17:22:41 -- spdk/autotest.sh@194 -- # uname -s 00:04:39.110 17:22:41 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:39.110 17:22:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:39.110 17:22:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:39.110 17:22:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:39.110 17:22:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.110 17:22:41 -- common/autotest_common.sh@10 -- # set +x 00:04:39.110 17:22:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:39.110 17:22:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:39.110 17:22:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:39.110 17:22:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:39.110 17:22:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.110 17:22:41 -- common/autotest_common.sh@10 -- # set +x 00:04:39.370 ************************************ 00:04:39.370 START TEST nvmf_tcp 00:04:39.370 ************************************ 00:04:39.370 17:22:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:39.370 * Looking for test storage... 
00:04:39.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:39.370 17:22:41 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.370 17:22:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.370 17:22:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.370 17:22:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.370 17:22:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.370 17:22:41 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.370 17:22:41 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.371 --rc genhtml_branch_coverage=1 00:04:39.371 --rc genhtml_function_coverage=1 00:04:39.371 --rc genhtml_legend=1 00:04:39.371 --rc geninfo_all_blocks=1 00:04:39.371 --rc geninfo_unexecuted_blocks=1 00:04:39.371 00:04:39.371 ' 00:04:39.371 17:22:41 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.371 --rc genhtml_branch_coverage=1 00:04:39.371 --rc genhtml_function_coverage=1 00:04:39.371 --rc genhtml_legend=1 00:04:39.371 --rc geninfo_all_blocks=1 00:04:39.371 --rc geninfo_unexecuted_blocks=1 00:04:39.371 00:04:39.371 ' 00:04:39.371 17:22:41 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:39.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.371 --rc genhtml_branch_coverage=1 00:04:39.371 --rc genhtml_function_coverage=1 00:04:39.371 --rc genhtml_legend=1 00:04:39.371 --rc geninfo_all_blocks=1 00:04:39.371 --rc geninfo_unexecuted_blocks=1 00:04:39.371 00:04:39.371 ' 00:04:39.371 17:22:41 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.371 --rc genhtml_branch_coverage=1 00:04:39.371 --rc genhtml_function_coverage=1 00:04:39.371 --rc genhtml_legend=1 00:04:39.371 --rc geninfo_all_blocks=1 00:04:39.371 --rc geninfo_unexecuted_blocks=1 00:04:39.371 00:04:39.371 ' 00:04:39.371 17:22:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:39.371 17:22:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:39.371 17:22:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:39.371 17:22:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:39.371 17:22:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.371 17:22:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.371 ************************************ 00:04:39.371 START TEST nvmf_target_core 00:04:39.371 ************************************ 00:04:39.371 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:39.631 * Looking for test storage... 
00:04:39.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.631 --rc genhtml_branch_coverage=1 00:04:39.631 --rc genhtml_function_coverage=1 00:04:39.631 --rc genhtml_legend=1 00:04:39.631 --rc geninfo_all_blocks=1 00:04:39.631 --rc geninfo_unexecuted_blocks=1 00:04:39.631 00:04:39.631 ' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.631 --rc genhtml_branch_coverage=1 
00:04:39.631 --rc genhtml_function_coverage=1 00:04:39.631 --rc genhtml_legend=1 00:04:39.631 --rc geninfo_all_blocks=1 00:04:39.631 --rc geninfo_unexecuted_blocks=1 00:04:39.631 00:04:39.631 ' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.631 --rc genhtml_branch_coverage=1 00:04:39.631 --rc genhtml_function_coverage=1 00:04:39.631 --rc genhtml_legend=1 00:04:39.631 --rc geninfo_all_blocks=1 00:04:39.631 --rc geninfo_unexecuted_blocks=1 00:04:39.631 00:04:39.631 ' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.631 --rc genhtml_branch_coverage=1 00:04:39.631 --rc genhtml_function_coverage=1 00:04:39.631 --rc genhtml_legend=1 00:04:39.631 --rc geninfo_all_blocks=1 00:04:39.631 --rc geninfo_unexecuted_blocks=1 00:04:39.631 00:04:39.631 ' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.631 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:39.632 ************************************ 00:04:39.632 START TEST nvmf_abort 00:04:39.632 ************************************ 00:04:39.632 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:39.892 * Looking for test storage... 
00:04:39.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.892 
17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.892 --rc genhtml_branch_coverage=1 00:04:39.892 --rc genhtml_function_coverage=1 00:04:39.892 --rc genhtml_legend=1 00:04:39.892 --rc geninfo_all_blocks=1 00:04:39.892 --rc 
geninfo_unexecuted_blocks=1 00:04:39.892 00:04:39.892 ' 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.892 --rc genhtml_branch_coverage=1 00:04:39.892 --rc genhtml_function_coverage=1 00:04:39.892 --rc genhtml_legend=1 00:04:39.892 --rc geninfo_all_blocks=1 00:04:39.892 --rc geninfo_unexecuted_blocks=1 00:04:39.892 00:04:39.892 ' 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.892 --rc genhtml_branch_coverage=1 00:04:39.892 --rc genhtml_function_coverage=1 00:04:39.892 --rc genhtml_legend=1 00:04:39.892 --rc geninfo_all_blocks=1 00:04:39.892 --rc geninfo_unexecuted_blocks=1 00:04:39.892 00:04:39.892 ' 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.892 --rc genhtml_branch_coverage=1 00:04:39.892 --rc genhtml_function_coverage=1 00:04:39.892 --rc genhtml_legend=1 00:04:39.892 --rc geninfo_all_blocks=1 00:04:39.892 --rc geninfo_unexecuted_blocks=1 00:04:39.892 00:04:39.892 ' 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.892 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.893 17:22:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.893 17:22:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:39.893 17:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:46.469 17:22:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:46.469 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:46.469 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:46.469 17:22:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:46.469 Found net devices under 0000:86:00.0: cvl_0_0 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:46.469 Found net devices under 0000:86:00.1: cvl_0_1 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:46.469 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:46.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:46.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:04:46.470 00:04:46.470 --- 10.0.0.2 ping statistics --- 00:04:46.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:46.470 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:46.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:46.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:04:46.470 00:04:46.470 --- 10.0.0.1 ping statistics --- 00:04:46.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:46.470 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.470 17:22:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3277801 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3277801 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3277801 ']' 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 [2024-11-19 17:22:48.058078] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:04:46.470 [2024-11-19 17:22:48.058127] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:46.470 [2024-11-19 17:22:48.136362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.470 [2024-11-19 17:22:48.179135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:46.470 [2024-11-19 17:22:48.179169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:46.470 [2024-11-19 17:22:48.179177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:46.470 [2024-11-19 17:22:48.179183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:46.470 [2024-11-19 17:22:48.179188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:46.470 [2024-11-19 17:22:48.180655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.470 [2024-11-19 17:22:48.180744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.470 [2024-11-19 17:22:48.180744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 [2024-11-19 17:22:48.329592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 Malloc0 00:04:46.470 17:22:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 Delay0 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 [2024-11-19 17:22:48.414281] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.470 17:22:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:46.470 [2024-11-19 17:22:48.594124] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:49.007 Initializing NVMe Controllers 00:04:49.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:49.007 controller IO queue size 128 less than required 00:04:49.007 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:49.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:49.007 Initialization complete. Launching workers. 
00:04:49.007 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36717 00:04:49.007 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36782, failed to submit 62 00:04:49.007 success 36721, unsuccessful 61, failed 0 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:49.007 rmmod nvme_tcp 00:04:49.007 rmmod nvme_fabrics 00:04:49.007 rmmod nvme_keyring 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:49.007 17:22:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3277801 ']' 00:04:49.007 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3277801 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3277801 ']' 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3277801 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277801 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277801' 00:04:49.008 killing process with pid 3277801 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3277801 00:04:49.008 17:22:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3277801 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:49.008 17:22:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:51.544 00:04:51.544 real 0m11.370s 00:04:51.544 user 0m12.283s 00:04:51.544 sys 0m5.445s 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:51.544 ************************************ 00:04:51.544 END TEST nvmf_abort 00:04:51.544 ************************************ 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:51.544 ************************************ 00:04:51.544 START TEST nvmf_ns_hotplug_stress 00:04:51.544 ************************************ 00:04:51.544 17:22:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:51.544 * Looking for test storage... 00:04:51.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.544 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.545 
17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.545 17:22:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.545 --rc genhtml_branch_coverage=1 00:04:51.545 --rc genhtml_function_coverage=1 00:04:51.545 --rc genhtml_legend=1 00:04:51.545 --rc geninfo_all_blocks=1 00:04:51.545 --rc geninfo_unexecuted_blocks=1 00:04:51.545 00:04:51.545 ' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.545 --rc genhtml_branch_coverage=1 00:04:51.545 --rc genhtml_function_coverage=1 00:04:51.545 --rc genhtml_legend=1 00:04:51.545 --rc geninfo_all_blocks=1 00:04:51.545 --rc geninfo_unexecuted_blocks=1 00:04:51.545 00:04:51.545 ' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.545 --rc genhtml_branch_coverage=1 00:04:51.545 --rc genhtml_function_coverage=1 00:04:51.545 --rc genhtml_legend=1 00:04:51.545 --rc geninfo_all_blocks=1 00:04:51.545 --rc geninfo_unexecuted_blocks=1 00:04:51.545 00:04:51.545 ' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.545 --rc genhtml_branch_coverage=1 00:04:51.545 --rc genhtml_function_coverage=1 00:04:51.545 --rc genhtml_legend=1 00:04:51.545 --rc geninfo_all_blocks=1 00:04:51.545 --rc geninfo_unexecuted_blocks=1 00:04:51.545 
00:04:51.545 ' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:51.545 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:51.546 17:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:04:58.120 17:22:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:58.120 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:58.121 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:58.121 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:58.121 17:22:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:58.121 Found net devices under 0000:86:00.0: cvl_0_0 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:58.121 17:22:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:04:58.121 Found net devices under 0000:86:00.1: cvl_0_1
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:04:58.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:04:58.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms
00:04:58.121
00:04:58.121 --- 10.0.0.2 ping statistics ---
00:04:58.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:04:58.121 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:04:58.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:04:58.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms
00:04:58.121
00:04:58.121 --- 10.0.0.1 ping statistics ---
00:04:58.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:04:58.121 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3281830
00:04:58.121 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3281830
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3281830 ']'
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:04:58.122 [2024-11-19 17:22:59.490179] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
00:04:58.122 [2024-11-19 17:22:59.490225] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:04:58.122 [2024-11-19 17:22:59.569178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:58.122 [2024-11-19 17:22:59.611604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:04:58.122 [2024-11-19 17:22:59.611639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:04:58.122 [2024-11-19 17:22:59.611646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:58.122 [2024-11-19 17:22:59.611652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:58.122 [2024-11-19 17:22:59.611657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:04:58.122 [2024-11-19 17:22:59.613056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:58.122 [2024-11-19 17:22:59.613161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:58.122 [2024-11-19 17:22:59.613163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:04:58.122 [2024-11-19 17:22:59.918335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:58.122 17:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:04:58.122 17:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:04:58.122 [2024-11-19 17:23:00.335818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:04:58.381 17:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:04:58.381 17:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:04:58.641 Malloc0
00:04:58.641 17:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:04:58.900 Delay0
00:04:58.900 17:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:04:59.158 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:04:59.158 NULL1
00:04:59.158 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:04:59.416 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3282234
00:04:59.416 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:04:59.416 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:04:59.416 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:04:59.675 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:04:59.961 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:04:59.961 17:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:04:59.961 true
00:05:00.220 17:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:00.220 17:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:00.220 17:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:00.479 17:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:00.479 17:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:00.738 true
00:05:00.738 17:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:00.738 17:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:00.997 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:01.257 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:01.257 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:01.257 true
00:05:01.257 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:01.257 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:01.516 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:01.775 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:01.775 17:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:02.034 true
00:05:02.034 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:02.034 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:02.313 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:02.613 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:02.613 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:02.613 true
00:05:02.613 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:02.613 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:02.908 17:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:03.168 17:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:03.168 17:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:03.168 true
00:05:03.168 17:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:03.168 17:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:03.427 17:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:03.686 17:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:03.686 17:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:03.945 true
00:05:03.945 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:03.945 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:04.204 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:04.463 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:04.463 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:04.463 true
00:05:04.463 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:04.463 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:04.723 17:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:04.982 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:04.982 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:05.241 true
00:05:05.241 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:05.241 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:05.501 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:05.501 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:05.501 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:05.760 true
00:05:05.760 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:05.760 17:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:06.019 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:06.278 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:06.278 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:06.537 true
00:05:06.537 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:06.537 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:06.796 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:06.796 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:05:06.796 17:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:05:07.055 true
00:05:07.055 17:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:07.055 17:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:07.314 17:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:07.574 17:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:05:07.574 17:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:05:07.574 true
00:05:07.833 17:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:07.834 17:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:07.834 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:08.092 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:08.092 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:08.351 true
00:05:08.351 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:08.351 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:08.610 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:08.870 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:05:08.870 17:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:05:08.870 true
00:05:09.129 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:09.129 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:09.129 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:09.389 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:05:09.389 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:05:09.648 true
00:05:09.648 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:09.648 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:09.908 17:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:10.167 17:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:10.167 17:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:05:10.167 true
00:05:10.167 17:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:10.167 17:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:10.427 17:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:10.686 17:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:05:10.686 17:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:05:10.945 true
00:05:10.945 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:10.945 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:11.204 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:11.204 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:11.204 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:11.463 true
00:05:11.463 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:11.463 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:11.722 17:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:11.982 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:11.982 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:12.239 true
00:05:12.239 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:12.239 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:12.497 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:12.497 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:12.497 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:12.756 true
00:05:12.756 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:12.756 17:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:13.015 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:13.273 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:13.273 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:13.531 true
00:05:13.531 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:13.531 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:13.531 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:13.790 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:13.790 17:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:14.049 true
00:05:14.049 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:14.049 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:14.308 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:14.567 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:14.567 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:14.567 true
00:05:14.567 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:14.567 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:14.826 17:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:15.085 17:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:15.085 17:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:15.344 true
00:05:15.344 17:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:15.344 17:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:15.603 17:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:15.861 17:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:15.861 17:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:15.861 true
00:05:15.861 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:15.861 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:16.119 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:16.378 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:16.378 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:16.636 true
00:05:16.636 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:16.636 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:16.895 17:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:16.895 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:16.895 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:17.154 true
00:05:17.154 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:17.154 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:17.413 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:17.672 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:17.672 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:17.932 true
00:05:17.932 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234
00:05:17.932 17:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:18.192 17:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:18.192
17:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:18.192 17:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:18.451 true 00:05:18.451 17:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:18.451 17:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.711 17:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.970 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:18.970 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:18.970 true 00:05:19.229 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:19.229 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.229 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.488 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:19.488 17:23:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:19.747 true 00:05:19.747 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:19.747 17:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.006 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.265 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:20.265 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:20.524 true 00:05:20.524 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:20.524 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.783 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.784 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:20.784 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:21.042 true 00:05:21.042 17:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:21.042 17:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.302 17:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.561 17:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:21.561 17:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:21.820 true 00:05:21.820 17:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:21.820 17:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.080 17:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.080 17:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:22.080 17:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:22.339 true 00:05:22.339 17:23:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:22.339 17:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.597 17:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.856 17:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:22.856 17:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:23.116 true 00:05:23.116 17:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:23.116 17:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.375 17:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.375 17:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:23.375 17:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:23.634 true 00:05:23.634 17:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:23.634 17:23:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.893 17:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.152 17:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:24.152 17:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:24.412 true 00:05:24.412 17:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:24.412 17:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.671 17:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.671 17:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:24.671 17:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:24.931 true 00:05:24.931 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:24.931 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.190 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.449 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:25.449 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:25.708 true 00:05:25.708 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:25.708 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.967 17:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.967 17:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:25.967 17:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:26.226 true 00:05:26.226 17:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:26.226 17:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.486 
17:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.745 17:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:26.745 17:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:27.005 true 00:05:27.005 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:27.005 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.264 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.264 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:27.264 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:27.524 true 00:05:27.524 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:27.524 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.784 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.043 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:28.043 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:28.302 true 00:05:28.302 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:28.302 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.561 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.561 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:28.561 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:28.821 true 00:05:28.821 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:28.821 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.080 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.338 
17:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:29.338 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:29.598 true 00:05:29.598 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:29.598 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.857 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.857 Initializing NVMe Controllers 00:05:29.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:29.857 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:05:29.857 Controller IO queue size 128, less than required. 00:05:29.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:29.857 WARNING: Some requested NVMe devices were skipped 00:05:29.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:29.857 Initialization complete. Launching workers. 
00:05:29.857 ======================================================== 00:05:29.857 Latency(us) 00:05:29.857 Device Information : IOPS MiB/s Average min max 00:05:29.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26827.93 13.10 4771.06 2535.33 8794.88 00:05:29.857 ======================================================== 00:05:29.857 Total : 26827.93 13.10 4771.06 2535.33 8794.88 00:05:29.857 00:05:29.857 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:29.857 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:30.117 true 00:05:30.117 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3282234 00:05:30.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3282234) - No such process 00:05:30.117 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3282234 00:05:30.117 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.376 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:30.635 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:30.635 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:30.635 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:30.635 
17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:30.635 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:30.635 null0 00:05:30.635 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:30.635 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:30.635 17:23:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:30.894 null1 00:05:30.894 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:30.894 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:30.894 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:31.153 null2 00:05:31.153 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:31.153 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:31.153 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:31.412 null3 00:05:31.413 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:31.413 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:05:31.413 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:31.413 null4 00:05:31.672 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:31.672 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:31.672 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:31.672 null5 00:05:31.672 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:31.672 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:31.672 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:31.931 null6 00:05:31.931 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:31.931 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:31.931 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:32.191 null7 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.191 
17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.191 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.192 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.192 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:32.192 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3287763 3287764 3287766 3287768 3287770 3287772 3287773 3287775 00:05:32.192 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:32.192 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.192 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.192 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.451 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.452 
17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.452 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.712 17:23:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:32.712 17:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:32.971 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.971 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.971 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.972 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:33.232 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.490 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.491 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:33.491 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.491 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.491 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:33.491 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.491 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.491 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:33.750 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.750 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:33.750 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:33.750 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:33.750 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:33.750 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:33.751 17:23:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.751 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:34.010 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.269 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:34.528 17:23:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:34.528 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:34.528 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:34.528 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:34.528 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.528 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:34.528 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:34.528 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:34.788 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.789 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:34.789 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:34.789 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:34.789 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:34.789 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:34.789 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:35.048 17:23:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.048 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:35.308 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 
17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.568 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:35.826 17:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.826 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.827 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:35.827 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.827 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.084 17:23:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:36.084 17:23:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.084 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:36.343 rmmod nvme_tcp 00:05:36.343 rmmod nvme_fabrics 00:05:36.343 rmmod nvme_keyring 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:36.343 17:23:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3281830 ']' 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3281830 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3281830 ']' 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3281830 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:36.343 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3281830 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3281830' 00:05:36.603 killing process with pid 3281830 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3281830 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3281830 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 
-- # iptr 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.603 17:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.141 17:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:39.141 00:05:39.141 real 0m47.593s 00:05:39.141 user 3m22.341s 00:05:39.141 sys 0m17.599s 00:05:39.141 17:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.141 17:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.141 ************************************ 00:05:39.141 END TEST nvmf_ns_hotplug_stress 00:05:39.141 ************************************ 00:05:39.141 17:23:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:39.141 17:23:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:39.141 17:23:40 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.141 17:23:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:39.141 ************************************ 00:05:39.141 START TEST nvmf_delete_subsystem 00:05:39.141 ************************************ 00:05:39.141 17:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:39.141 * Looking for test storage... 00:05:39.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:39.141 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.141 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.141 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.141 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.142 17:23:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.142 --rc genhtml_branch_coverage=1 00:05:39.142 --rc genhtml_function_coverage=1 00:05:39.142 --rc genhtml_legend=1 00:05:39.142 --rc geninfo_all_blocks=1 00:05:39.142 --rc geninfo_unexecuted_blocks=1 00:05:39.142 00:05:39.142 ' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.142 --rc genhtml_branch_coverage=1 00:05:39.142 --rc genhtml_function_coverage=1 00:05:39.142 --rc genhtml_legend=1 00:05:39.142 --rc geninfo_all_blocks=1 00:05:39.142 --rc geninfo_unexecuted_blocks=1 00:05:39.142 00:05:39.142 ' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.142 --rc genhtml_branch_coverage=1 00:05:39.142 --rc genhtml_function_coverage=1 00:05:39.142 --rc genhtml_legend=1 00:05:39.142 --rc geninfo_all_blocks=1 00:05:39.142 --rc geninfo_unexecuted_blocks=1 00:05:39.142 00:05:39.142 ' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 
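The trace above steps through the `cmp_versions` helper in `scripts/common.sh` deciding whether `lcov 1.15` is older than `2` before choosing coverage flags. A minimal re-implementation sketch of that dotted-version "less than" check (the function name here is illustrative, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" test, in the spirit of the
# cmp_versions trace above. Returns 0 (true) when $1 < $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components count as 0, so 1.15 compares like 1.15.0.
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "older"   # lcov 1.15 < 2: take the legacy-flag branch
```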
00:05:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.142 --rc genhtml_branch_coverage=1 00:05:39.142 --rc genhtml_function_coverage=1 00:05:39.142 --rc genhtml_legend=1 00:05:39.142 --rc geninfo_all_blocks=1 00:05:39.142 --rc geninfo_unexecuted_blocks=1 00:05:39.142 00:05:39.142 ' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:39.142 17:23:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.142 17:23:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.142 17:23:41 
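The `[: : integer expression expected` message recorded above comes from an empty string reaching an arithmetic test (`'[' '' -eq 1 ']'`), i.e. an unset variable passed to `-eq`. A sketch of the usual guard for that pattern (the variable name is illustrative):

```shell
#!/usr/bin/env bash
# An empty value hitting [ "$FLAG" -eq 1 ] produces the
# "integer expression expected" error seen in the log above.
# Defaulting the expansion to 0 keeps the test well-formed.
FLAG=""                      # simulates the unset/empty case from the log
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```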
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:39.142 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:39.143 17:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:45.823 17:23:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:45.823 17:23:46 
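The trace above fills `e810`, `x722`, and `mlx` arrays with known PCI IDs, then buckets each discovered NIC by vendor:device pair. A sketch of that classification step, using IDs taken from the trace (the helper name is mine):

```shell
#!/usr/bin/env bash
# Bucket a NIC by its PCI vendor:device ID, mirroring the e810/x722/mlx
# grouping in the trace above (0x8086 = Intel, 0x15b3 = Mellanox).
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx  ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # the two ports found in this run (0000:86:00.0/1)
```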
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:45.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:45.823 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:45.823 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:45.824 Found net devices under 0000:86:00.0: cvl_0_0 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:45.824 Found net devices under 0000:86:00.1: cvl_0_1 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:45.824 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:45.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:45.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:05:45.824 00:05:45.824 --- 10.0.0.2 ping statistics --- 00:05:45.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.824 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:45.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:45.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:05:45.824 00:05:45.824 --- 10.0.0.1 ping statistics --- 00:05:45.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.824 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:45.824 17:23:47 
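The `nvmf_tcp_init` steps above build the test topology: one port (`cvl_0_0`, 10.0.0.2) moves into the `cvl_0_0_ns_spdk` namespace as the target, the peer port (`cvl_0_1`, 10.0.0.1) stays in the root namespace as the initiator, port 4420 is opened, and both directions are ping-verified. A dry-run sketch of that sequence; `RUN=echo` prints the commands instead of requiring root, set `RUN=` to actually apply them:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology set up in the log above.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk
TGT=cvl_0_0          # target-side port, moved into the namespace
INI=cvl_0_1          # initiator-side port, left in the root namespace

$RUN ip netns add "$NS"
$RUN ip link set "$TGT" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INI"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
$RUN ip link set "$INI" up
$RUN ip netns exec "$NS" ip link set "$TGT" up
$RUN iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                       # root ns -> namespaced target
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> initiator
```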
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3292378 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3292378 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3292378 ']' 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.824 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.824 [2024-11-19 17:23:47.176355] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:05:45.824 [2024-11-19 17:23:47.176398] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:45.824 [2024-11-19 17:23:47.257223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.824 [2024-11-19 17:23:47.298641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:45.825 [2024-11-19 17:23:47.298676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:45.825 [2024-11-19 17:23:47.298683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.825 [2024-11-19 17:23:47.298689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.825 [2024-11-19 17:23:47.298694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:45.825 [2024-11-19 17:23:47.299843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.825 [2024-11-19 17:23:47.299844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.825 [2024-11-19 17:23:47.436441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.825 [2024-11-19 17:23:47.456650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.825 NULL1 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.825 Delay0 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.825 17:23:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3292404 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:45.825 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:45.825 [2024-11-19 17:23:47.567602] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:47.732 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:47.732 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.732 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 
00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 starting I/O failed: -6 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 [2024-11-19 17:23:49.602578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a2c0 is same with the state(6) to be set 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with 
error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Read completed with error (sct=0, sc=8) 00:05:47.732 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 
00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with 
error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 starting I/O failed: -6 00:05:47.733 starting I/O failed: -6 00:05:47.733 starting I/O failed: -6 00:05:47.733 starting I/O failed: -6 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 
Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Read completed with error (sct=0, sc=8) 00:05:47.733 Write completed with error (sct=0, sc=8) 00:05:48.669 [2024-11-19 17:23:50.580437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b9a0 is same with the state(6) to be set 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Write 
completed with error (sct=0, sc=8) 00:05:48.669 Write completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Write completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Write completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Write completed with error (sct=0, sc=8) 00:05:48.669 [2024-11-19 17:23:50.606297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a680 is same with the state(6) to be set 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Write completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error (sct=0, sc=8) 00:05:48.669 Read completed with error 
(sct=0, sc=8) 00:05:48.669 Write completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 [2024-11-19 17:23:50.606427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a860 is same with the state(6) to be set 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 [2024-11-19 17:23:50.610217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7fac000c40 is same with the state(6) to be set 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error 
(sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Write completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 Read completed with error (sct=0, sc=8) 00:05:48.670 [2024-11-19 17:23:50.611514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7fac00d680 is same with the state(6) to be set 00:05:48.670 Initializing NVMe Controllers 00:05:48.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:48.670 Controller IO queue size 128, less than required. 00:05:48.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:48.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:48.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:48.670 Initialization complete. Launching workers. 
00:05:48.670 ======================================================== 00:05:48.670 Latency(us) 00:05:48.670 Device Information : IOPS MiB/s Average min max 00:05:48.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.29 0.08 903543.87 263.97 1005959.99 00:05:48.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.29 0.08 913642.04 246.08 1011524.55 00:05:48.670 ======================================================== 00:05:48.670 Total : 329.58 0.16 908577.70 246.08 1011524.55 00:05:48.670 00:05:48.670 [2024-11-19 17:23:50.612135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178b9a0 (9): Bad file descriptor 00:05:48.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:48.670 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.670 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:48.670 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3292404 00:05:48.670 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3292404 00:05:48.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3292404) - No such process 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3292404 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:48.929 17:23:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3292404 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3292404 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:48.929 
17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.929 [2024-11-19 17:23:51.137289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.929 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.187 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3293036 00:05:49.187 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:49.187 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:49.187 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:49.187 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:49.187 [2024-11-19 17:23:51.229221] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:49.446 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:49.446 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:49.446 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:50.013 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:50.013 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:50.013 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:50.585 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:50.585 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:50.585 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:51.155 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:51.155 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:51.155 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:51.724 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:51.724 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:51.724 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:51.983 17:23:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:51.983 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:51.983 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:52.550 Initializing NVMe Controllers 00:05:52.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:52.550 Controller IO queue size 128, less than required. 00:05:52.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:52.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:52.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:52.550 Initialization complete. Launching workers. 00:05:52.550 ======================================================== 00:05:52.550 Latency(us) 00:05:52.550 Device Information : IOPS MiB/s Average min max 00:05:52.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002404.18 1000122.76 1041409.31 00:05:52.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004194.95 1000139.14 1010264.71 00:05:52.550 ======================================================== 00:05:52.550 Total : 256.00 0.12 1003299.56 1000122.76 1041409.31 00:05:52.550 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3293036 00:05:52.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3293036) - No such process 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 3293036 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:52.550 rmmod nvme_tcp 00:05:52.550 rmmod nvme_fabrics 00:05:52.550 rmmod nvme_keyring 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3292378 ']' 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3292378 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3292378 ']' 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3292378 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:52.550 17:23:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.550 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3292378 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3292378' 00:05:52.809 killing process with pid 3292378 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3292378 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3292378 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:52.809 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:55.345 00:05:55.345 real 0m16.117s 00:05:55.345 user 0m28.962s 00:05:55.345 sys 0m5.565s 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.345 ************************************ 00:05:55.345 END TEST nvmf_delete_subsystem 00:05:55.345 ************************************ 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:55.345 ************************************ 00:05:55.345 START TEST nvmf_host_management 00:05:55.345 ************************************ 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:55.345 * Looking for test storage... 
00:05:55.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:55.345 17:23:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.345 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.346 17:23:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.346 --rc genhtml_branch_coverage=1 00:05:55.346 --rc genhtml_function_coverage=1 00:05:55.346 --rc genhtml_legend=1 00:05:55.346 --rc geninfo_all_blocks=1 00:05:55.346 --rc geninfo_unexecuted_blocks=1 00:05:55.346 00:05:55.346 ' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.346 --rc genhtml_branch_coverage=1 00:05:55.346 --rc genhtml_function_coverage=1 00:05:55.346 --rc genhtml_legend=1 00:05:55.346 --rc geninfo_all_blocks=1 00:05:55.346 --rc geninfo_unexecuted_blocks=1 00:05:55.346 00:05:55.346 ' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.346 --rc genhtml_branch_coverage=1 00:05:55.346 --rc genhtml_function_coverage=1 00:05:55.346 --rc genhtml_legend=1 00:05:55.346 --rc geninfo_all_blocks=1 00:05:55.346 --rc geninfo_unexecuted_blocks=1 00:05:55.346 00:05:55.346 ' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.346 --rc genhtml_branch_coverage=1 00:05:55.346 --rc genhtml_function_coverage=1 00:05:55.346 --rc genhtml_legend=1 00:05:55.346 --rc geninfo_all_blocks=1 00:05:55.346 --rc geninfo_unexecuted_blocks=1 00:05:55.346 00:05:55.346 ' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:55.346 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:01.919 17:24:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:01.919 17:24:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:01.919 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:01.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:01.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:01.920 17:24:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:01.920 Found net devices under 0000:86:00.0: cvl_0_0 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:01.920 Found net devices under 0000:86:00.1: cvl_0_1 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:01.920 17:24:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:01.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:01.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:06:01.920 00:06:01.920 --- 10.0.0.2 ping statistics --- 00:06:01.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.920 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:01.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:01.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:06:01.920 00:06:01.920 --- 10.0.0.1 ping statistics --- 00:06:01.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.920 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3297238 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3297238 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:01.920 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3297238 ']' 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 [2024-11-19 17:24:03.363798] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:06:01.921 [2024-11-19 17:24:03.363839] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.921 [2024-11-19 17:24:03.446459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.921 [2024-11-19 17:24:03.489051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:01.921 [2024-11-19 17:24:03.489090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:01.921 [2024-11-19 17:24:03.489096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.921 [2024-11-19 17:24:03.489103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.921 [2024-11-19 17:24:03.489108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
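The target was started with `-m 0x1E`, and the notices that follow show four reactors coming up. A quick sketch of decoding that core mask (bits 1 through 4 set), which is consistent with the "Total cores available: 4" notice and the reactor start-up lines:

```shell
# Decode the SPDK core mask passed via -m: each set bit selects one core.
mask=0x1E
cores=()
for (( c = 0; c < 8; c++ )); do
    (( (mask >> c) & 1 )) && cores+=("$c")
done
echo "cores: ${cores[*]}"   # cores: 1 2 3 4
```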
00:06:01.921 [2024-11-19 17:24:03.490752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.921 [2024-11-19 17:24:03.490885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.921 [2024-11-19 17:24:03.490991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.921 [2024-11-19 17:24:03.490992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 [2024-11-19 17:24:03.640237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:01.921 17:24:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 Malloc0 00:06:01.921 [2024-11-19 17:24:03.718338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3297481 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3297481 /var/tmp/bdevperf.sock 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3297481 ']' 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:01.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:01.921 { 00:06:01.921 "params": { 00:06:01.921 "name": "Nvme$subsystem", 00:06:01.921 "trtype": "$TEST_TRANSPORT", 00:06:01.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:01.921 "adrfam": "ipv4", 00:06:01.921 "trsvcid": "$NVMF_PORT", 00:06:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:01.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:01.921 "hdgst": ${hdgst:-false}, 
00:06:01.921 "ddgst": ${ddgst:-false} 00:06:01.921 }, 00:06:01.921 "method": "bdev_nvme_attach_controller" 00:06:01.921 } 00:06:01.921 EOF 00:06:01.921 )") 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:01.921 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:01.921 "params": { 00:06:01.921 "name": "Nvme0", 00:06:01.921 "trtype": "tcp", 00:06:01.921 "traddr": "10.0.0.2", 00:06:01.921 "adrfam": "ipv4", 00:06:01.921 "trsvcid": "4420", 00:06:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:01.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:01.921 "hdgst": false, 00:06:01.921 "ddgst": false 00:06:01.921 }, 00:06:01.921 "method": "bdev_nvme_attach_controller" 00:06:01.921 }' 00:06:01.921 [2024-11-19 17:24:03.813446] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:06:01.921 [2024-11-19 17:24:03.813491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297481 ] 00:06:01.921 [2024-11-19 17:24:03.890665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.921 [2024-11-19 17:24:03.932389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.921 Running I/O for 10 seconds... 
00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.489 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.750 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.751 [2024-11-19 17:24:04.741775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is 
same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be 
set 00:06:02.751 [2024-11-19 17:24:04.741937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.741981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86200 is same with the state(6) to be set 00:06:02.751 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.751 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:02.751 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.751 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.751 [2024-11-19 17:24:04.749998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.751 [2024-11-19 17:24:04.750032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.751 [2024-11-19 17:24:04.750050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.751 [2024-11-19 17:24:04.750064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.751 [2024-11-19 17:24:04.750079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b500 is same with the state(6) to be set 00:06:02.751 [2024-11-19 17:24:04.750341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 
17:24:04.750385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.751 [2024-11-19 17:24:04.750552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.751 [2024-11-19 17:24:04.750561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 
17:24:04.750736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.750988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.750996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.751003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.751011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.751017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.751026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.751037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.751046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.751053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.751062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.752 [2024-11-19 17:24:04.751068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.752 [2024-11-19 17:24:04.751076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 
[2024-11-19 17:24:04.751092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.751340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.753 [2024-11-19 17:24:04.751347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:02.753 [2024-11-19 17:24:04.752303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:02.753 task offset: 24576 on job bdev=Nvme0n1 fails 00:06:02.753 00:06:02.753 Latency(us) 00:06:02.753 [2024-11-19T16:24:04.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:02.753 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:02.753 Job: Nvme0n1 ended in about 0.61 seconds with error 00:06:02.753 Verification LBA range: start 0x0 length 0x400 00:06:02.753 Nvme0n1 : 0.61 1977.53 123.60 104.08 0.00 30110.28 1674.02 27696.08 00:06:02.753 [2024-11-19T16:24:04.976Z] =================================================================================================================== 00:06:02.753 [2024-11-19T16:24:04.976Z] Total : 1977.53 123.60 104.08 0.00 30110.28 1674.02 27696.08 00:06:02.753 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.753 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:02.753 [2024-11-19 17:24:04.754690] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.753 [2024-11-19 17:24:04.754710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b500 (9): Bad file descriptor 00:06:02.753 [2024-11-19 17:24:04.763353] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3297481 00:06:03.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3297481) - No such process 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:03.691 { 00:06:03.691 "params": { 00:06:03.691 "name": "Nvme$subsystem", 00:06:03.691 "trtype": "$TEST_TRANSPORT", 00:06:03.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:03.691 "adrfam": "ipv4", 00:06:03.691 "trsvcid": "$NVMF_PORT", 00:06:03.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:03.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:03.691 "hdgst": ${hdgst:-false}, 00:06:03.691 "ddgst": ${ddgst:-false} 00:06:03.691 }, 00:06:03.691 "method": "bdev_nvme_attach_controller" 00:06:03.691 } 00:06:03.691 EOF 00:06:03.691 )") 00:06:03.691 
17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:03.691 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:03.691 "params": { 00:06:03.691 "name": "Nvme0", 00:06:03.691 "trtype": "tcp", 00:06:03.691 "traddr": "10.0.0.2", 00:06:03.691 "adrfam": "ipv4", 00:06:03.691 "trsvcid": "4420", 00:06:03.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:03.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:03.691 "hdgst": false, 00:06:03.691 "ddgst": false 00:06:03.691 }, 00:06:03.691 "method": "bdev_nvme_attach_controller" 00:06:03.691 }' 00:06:03.691 [2024-11-19 17:24:05.807655] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:06:03.691 [2024-11-19 17:24:05.807703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297889 ] 00:06:03.691 [2024-11-19 17:24:05.881920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.951 [2024-11-19 17:24:05.923093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.210 Running I/O for 1 seconds... 
00:06:05.147 1984.00 IOPS, 124.00 MiB/s 00:06:05.147 Latency(us) 00:06:05.147 [2024-11-19T16:24:07.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:05.147 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:05.147 Verification LBA range: start 0x0 length 0x400 00:06:05.147 Nvme0n1 : 1.02 2007.50 125.47 0.00 0.00 31377.80 5898.24 27468.13 00:06:05.147 [2024-11-19T16:24:07.370Z] =================================================================================================================== 00:06:05.147 [2024-11-19T16:24:07.370Z] Total : 2007.50 125.47 0.00 0.00 31377.80 5898.24 27468.13 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.406 17:24:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:05.406 rmmod nvme_tcp 00:06:05.406 rmmod nvme_fabrics 00:06:05.406 rmmod nvme_keyring 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3297238 ']' 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3297238 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3297238 ']' 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3297238 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3297238 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3297238' 00:06:05.406 killing process with pid 3297238 00:06:05.406 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3297238 00:06:05.406 17:24:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3297238 00:06:05.665 [2024-11-19 17:24:07.658659] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:05.665 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.666 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:05.666 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.666 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.666 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:07.587 00:06:07.587 real 0m12.654s 00:06:07.587 user 0m20.981s 
00:06:07.587 sys 0m5.622s 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.587 ************************************ 00:06:07.587 END TEST nvmf_host_management 00:06:07.587 ************************************ 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.587 17:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.847 ************************************ 00:06:07.847 START TEST nvmf_lvol 00:06:07.847 ************************************ 00:06:07.847 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:07.847 * Looking for test storage... 
00:06:07.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.847 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.847 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.847 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.848 17:24:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.848 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.848 --rc genhtml_branch_coverage=1 00:06:07.848 --rc genhtml_function_coverage=1 00:06:07.848 --rc genhtml_legend=1 00:06:07.848 --rc geninfo_all_blocks=1 00:06:07.848 --rc geninfo_unexecuted_blocks=1 
00:06:07.848 00:06:07.848 ' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.848 --rc genhtml_branch_coverage=1 00:06:07.848 --rc genhtml_function_coverage=1 00:06:07.848 --rc genhtml_legend=1 00:06:07.848 --rc geninfo_all_blocks=1 00:06:07.848 --rc geninfo_unexecuted_blocks=1 00:06:07.848 00:06:07.848 ' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.848 --rc genhtml_branch_coverage=1 00:06:07.848 --rc genhtml_function_coverage=1 00:06:07.848 --rc genhtml_legend=1 00:06:07.848 --rc geninfo_all_blocks=1 00:06:07.848 --rc geninfo_unexecuted_blocks=1 00:06:07.848 00:06:07.848 ' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.848 --rc genhtml_branch_coverage=1 00:06:07.848 --rc genhtml_function_coverage=1 00:06:07.848 --rc genhtml_legend=1 00:06:07.848 --rc geninfo_all_blocks=1 00:06:07.848 --rc geninfo_unexecuted_blocks=1 00:06:07.848 00:06:07.848 ' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.848 17:24:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:07.848 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.849 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:14.420 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:14.421 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:14.421 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.421 
17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:14.421 Found net devices under 0000:86:00.0: cvl_0_0 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.421 17:24:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:14.421 Found net devices under 0000:86:00.1: cvl_0_1 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:14.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:14.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:06:14.421 00:06:14.421 --- 10.0.0.2 ping statistics --- 00:06:14.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.421 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:06:14.421 00:06:14.421 --- 10.0.0.1 ping statistics --- 00:06:14.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.421 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:14.421 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.422 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:14.422 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3301948 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3301948 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3301948 ']' 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:14.422 [2024-11-19 17:24:16.094309] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:06:14.422 [2024-11-19 17:24:16.094358] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.422 [2024-11-19 17:24:16.176205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.422 [2024-11-19 17:24:16.219107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.422 [2024-11-19 17:24:16.219140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.422 [2024-11-19 17:24:16.219148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.422 [2024-11-19 17:24:16.219154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.422 [2024-11-19 17:24:16.219159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:14.422 [2024-11-19 17:24:16.220491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.422 [2024-11-19 17:24:16.220596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.422 [2024-11-19 17:24:16.220598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:14.422 [2024-11-19 17:24:16.522390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.422 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:14.681 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:14.681 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:14.939 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:14.939 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:15.196 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:15.455 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=659d4493-38fc-4772-aadd-bf49c896cc15 00:06:15.455 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 659d4493-38fc-4772-aadd-bf49c896cc15 lvol 20 00:06:15.455 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c7377eb9-ed27-45db-94a9-b9821b35b3e2 00:06:15.455 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.713 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7377eb9-ed27-45db-94a9-b9821b35b3e2 00:06:15.972 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:16.231 [2024-11-19 17:24:18.214631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.231 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.231 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3302411 00:06:16.231 17:24:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:16.231 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:17.625 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c7377eb9-ed27-45db-94a9-b9821b35b3e2 MY_SNAPSHOT 00:06:17.625 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2e26990a-6c40-46f9-8a80-3d450caa4b04 00:06:17.625 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c7377eb9-ed27-45db-94a9-b9821b35b3e2 30 00:06:17.884 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2e26990a-6c40-46f9-8a80-3d450caa4b04 MY_CLONE 00:06:18.142 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=002ab3c5-a3cf-48d5-84ab-b6f7ceccbfda 00:06:18.142 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 002ab3c5-a3cf-48d5-84ab-b6f7ceccbfda 00:06:18.709 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3302411 00:06:26.828 Initializing NVMe Controllers 00:06:26.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:26.828 Controller IO queue size 128, less than required. 00:06:26.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:26.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:26.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:26.828 Initialization complete. Launching workers. 00:06:26.828 ======================================================== 00:06:26.828 Latency(us) 00:06:26.828 Device Information : IOPS MiB/s Average min max 00:06:26.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11561.50 45.16 11079.07 1607.24 63114.05 00:06:26.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11676.00 45.61 10968.15 477.70 67573.86 00:06:26.828 ======================================================== 00:06:26.828 Total : 23237.50 90.77 11023.34 477.70 67573.86 00:06:26.828 00:06:26.828 17:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:27.087 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c7377eb9-ed27-45db-94a9-b9821b35b3e2 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 659d4493-38fc-4772-aadd-bf49c896cc15 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:27.346 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:27.346 rmmod nvme_tcp 00:06:27.604 rmmod nvme_fabrics 00:06:27.604 rmmod nvme_keyring 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3301948 ']' 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3301948 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3301948 ']' 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3301948 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3301948 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3301948' 00:06:27.604 killing process with pid 3301948 00:06:27.604 17:24:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3301948 00:06:27.604 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3301948 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.864 17:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.770 17:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:29.770 00:06:29.770 real 0m22.106s 00:06:29.770 user 1m3.647s 00:06:29.770 sys 0m7.708s 00:06:29.770 17:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.770 17:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.770 ************************************ 00:06:29.770 END TEST 
nvmf_lvol 00:06:29.770 ************************************ 00:06:29.770 17:24:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:29.770 17:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.770 17:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.770 17:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.029 ************************************ 00:06:30.029 START TEST nvmf_lvs_grow 00:06:30.029 ************************************ 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:30.029 * Looking for test storage... 00:06:30.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.029 17:24:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.029 --rc genhtml_branch_coverage=1 00:06:30.029 --rc genhtml_function_coverage=1 00:06:30.029 --rc genhtml_legend=1 00:06:30.029 --rc geninfo_all_blocks=1 00:06:30.029 --rc geninfo_unexecuted_blocks=1 00:06:30.029 00:06:30.029 ' 
00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.029 --rc genhtml_branch_coverage=1 00:06:30.029 --rc genhtml_function_coverage=1 00:06:30.029 --rc genhtml_legend=1 00:06:30.029 --rc geninfo_all_blocks=1 00:06:30.029 --rc geninfo_unexecuted_blocks=1 00:06:30.029 00:06:30.029 ' 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.029 --rc genhtml_branch_coverage=1 00:06:30.029 --rc genhtml_function_coverage=1 00:06:30.029 --rc genhtml_legend=1 00:06:30.029 --rc geninfo_all_blocks=1 00:06:30.029 --rc geninfo_unexecuted_blocks=1 00:06:30.029 00:06:30.029 ' 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.029 --rc genhtml_branch_coverage=1 00:06:30.029 --rc genhtml_function_coverage=1 00:06:30.029 --rc genhtml_legend=1 00:06:30.029 --rc geninfo_all_blocks=1 00:06:30.029 --rc geninfo_unexecuted_blocks=1 00:06:30.029 00:06:30.029 ' 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.029 17:24:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.029 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.030 
17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.030 17:24:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.030 
17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.030 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:36.607 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.607 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:36.607 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.608 
17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:36.608 Found net devices under 0000:86:00.0: cvl_0_0 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:36.608 Found net devices under 0000:86:00.1: cvl_0_1 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.608 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:36.608 17:24:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:36.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:36.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:06:36.608 00:06:36.608 --- 10.0.0.2 ping statistics --- 00:06:36.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.608 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:36.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:36.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:06:36.608 00:06:36.608 --- 10.0.0.1 ping statistics --- 00:06:36.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.608 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3307791 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3307791 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3307791 ']' 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:36.608 [2024-11-19 17:24:38.289978] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:06:36.608 [2024-11-19 17:24:38.290033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.608 [2024-11-19 17:24:38.369747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.608 [2024-11-19 17:24:38.412018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.608 [2024-11-19 17:24:38.412054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.608 [2024-11-19 17:24:38.412061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.608 [2024-11-19 17:24:38.412067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.608 [2024-11-19 17:24:38.412072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:36.608 [2024-11-19 17:24:38.412626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:36.608 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:36.609 [2024-11-19 17:24:38.713492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:36.609 ************************************ 00:06:36.609 START TEST lvs_grow_clean 00:06:36.609 ************************************ 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:36.609 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:36.870 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:36.870 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:37.128 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=740730f0-5a20-4920-9dc3-955d274856cf 00:06:37.128 17:24:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:37.128 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:37.387 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:37.387 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:37.387 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 740730f0-5a20-4920-9dc3-955d274856cf lvol 150 00:06:37.387 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c7be8fc6-6bdf-49c3-82af-f7bd090717c3 00:06:37.387 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:37.387 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:37.646 [2024-11-19 17:24:39.753893] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:37.646 [2024-11-19 17:24:39.753942] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:37.646 true 00:06:37.646 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:37.646 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:37.906 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:37.906 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:38.165 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7be8fc6-6bdf-49c3-82af-f7bd090717c3 00:06:38.165 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:38.424 [2024-11-19 17:24:40.532207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.424 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3308297 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3308297 /var/tmp/bdevperf.sock 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3308297 ']' 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:38.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:38.683 [2024-11-19 17:24:40.790569] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:06:38.683 [2024-11-19 17:24:40.790617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308297 ] 00:06:38.683 [2024-11-19 17:24:40.865246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.942 [2024-11-19 17:24:40.905939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.942 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.942 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:38.942 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:39.201 Nvme0n1 00:06:39.201 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:39.459 [ 00:06:39.459 { 00:06:39.459 "name": "Nvme0n1", 00:06:39.459 "aliases": [ 00:06:39.459 "c7be8fc6-6bdf-49c3-82af-f7bd090717c3" 00:06:39.459 ], 00:06:39.459 "product_name": "NVMe disk", 00:06:39.459 "block_size": 4096, 00:06:39.459 "num_blocks": 38912, 00:06:39.459 "uuid": "c7be8fc6-6bdf-49c3-82af-f7bd090717c3", 00:06:39.459 "numa_id": 1, 00:06:39.459 "assigned_rate_limits": { 00:06:39.459 "rw_ios_per_sec": 0, 00:06:39.459 "rw_mbytes_per_sec": 0, 00:06:39.459 "r_mbytes_per_sec": 0, 00:06:39.459 "w_mbytes_per_sec": 0 00:06:39.459 }, 00:06:39.459 "claimed": false, 00:06:39.459 "zoned": false, 00:06:39.459 "supported_io_types": { 00:06:39.459 "read": true, 
00:06:39.459 "write": true, 00:06:39.459 "unmap": true, 00:06:39.459 "flush": true, 00:06:39.459 "reset": true, 00:06:39.459 "nvme_admin": true, 00:06:39.459 "nvme_io": true, 00:06:39.459 "nvme_io_md": false, 00:06:39.459 "write_zeroes": true, 00:06:39.459 "zcopy": false, 00:06:39.459 "get_zone_info": false, 00:06:39.459 "zone_management": false, 00:06:39.459 "zone_append": false, 00:06:39.459 "compare": true, 00:06:39.459 "compare_and_write": true, 00:06:39.459 "abort": true, 00:06:39.459 "seek_hole": false, 00:06:39.459 "seek_data": false, 00:06:39.459 "copy": true, 00:06:39.459 "nvme_iov_md": false 00:06:39.459 }, 00:06:39.459 "memory_domains": [ 00:06:39.459 { 00:06:39.459 "dma_device_id": "system", 00:06:39.459 "dma_device_type": 1 00:06:39.459 } 00:06:39.459 ], 00:06:39.459 "driver_specific": { 00:06:39.459 "nvme": [ 00:06:39.459 { 00:06:39.459 "trid": { 00:06:39.459 "trtype": "TCP", 00:06:39.460 "adrfam": "IPv4", 00:06:39.460 "traddr": "10.0.0.2", 00:06:39.460 "trsvcid": "4420", 00:06:39.460 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:39.460 }, 00:06:39.460 "ctrlr_data": { 00:06:39.460 "cntlid": 1, 00:06:39.460 "vendor_id": "0x8086", 00:06:39.460 "model_number": "SPDK bdev Controller", 00:06:39.460 "serial_number": "SPDK0", 00:06:39.460 "firmware_revision": "25.01", 00:06:39.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.460 "oacs": { 00:06:39.460 "security": 0, 00:06:39.460 "format": 0, 00:06:39.460 "firmware": 0, 00:06:39.460 "ns_manage": 0 00:06:39.460 }, 00:06:39.460 "multi_ctrlr": true, 00:06:39.460 "ana_reporting": false 00:06:39.460 }, 00:06:39.460 "vs": { 00:06:39.460 "nvme_version": "1.3" 00:06:39.460 }, 00:06:39.460 "ns_data": { 00:06:39.460 "id": 1, 00:06:39.460 "can_share": true 00:06:39.460 } 00:06:39.460 } 00:06:39.460 ], 00:06:39.460 "mp_policy": "active_passive" 00:06:39.460 } 00:06:39.460 } 00:06:39.460 ] 00:06:39.460 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3308527 00:06:39.460 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:39.460 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:39.719 Running I/O for 10 seconds... 00:06:40.656 Latency(us) 00:06:40.656 [2024-11-19T16:24:42.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:40.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:40.656 Nvme0n1 : 1.00 22572.00 88.17 0.00 0.00 0.00 0.00 0.00 00:06:40.656 [2024-11-19T16:24:42.879Z] =================================================================================================================== 00:06:40.656 [2024-11-19T16:24:42.879Z] Total : 22572.00 88.17 0.00 0.00 0.00 0.00 0.00 00:06:40.656 00:06:41.594 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:41.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:41.594 Nvme0n1 : 2.00 22747.00 88.86 0.00 0.00 0.00 0.00 0.00 00:06:41.594 [2024-11-19T16:24:43.817Z] =================================================================================================================== 00:06:41.594 [2024-11-19T16:24:43.817Z] Total : 22747.00 88.86 0.00 0.00 0.00 0.00 0.00 00:06:41.594 00:06:41.594 true 00:06:41.858 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:41.858 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:41.858 17:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:41.858 17:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:41.858 17:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3308527 00:06:42.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.520 Nvme0n1 : 3.00 22817.00 89.13 0.00 0.00 0.00 0.00 0.00 00:06:42.520 [2024-11-19T16:24:44.743Z] =================================================================================================================== 00:06:42.520 [2024-11-19T16:24:44.743Z] Total : 22817.00 89.13 0.00 0.00 0.00 0.00 0.00 00:06:42.520 00:06:43.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.512 Nvme0n1 : 4.00 22882.50 89.38 0.00 0.00 0.00 0.00 0.00 00:06:43.512 [2024-11-19T16:24:45.735Z] =================================================================================================================== 00:06:43.512 [2024-11-19T16:24:45.735Z] Total : 22882.50 89.38 0.00 0.00 0.00 0.00 0.00 00:06:43.512 00:06:44.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.890 Nvme0n1 : 5.00 22935.40 89.59 0.00 0.00 0.00 0.00 0.00 00:06:44.890 [2024-11-19T16:24:47.113Z] =================================================================================================================== 00:06:44.890 [2024-11-19T16:24:47.113Z] Total : 22935.40 89.59 0.00 0.00 0.00 0.00 0.00 00:06:44.890 00:06:45.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.828 Nvme0n1 : 6.00 22971.33 89.73 0.00 0.00 0.00 0.00 0.00 00:06:45.828 [2024-11-19T16:24:48.051Z] =================================================================================================================== 00:06:45.828 
[2024-11-19T16:24:48.051Z] Total : 22971.33 89.73 0.00 0.00 0.00 0.00 0.00 00:06:45.828 00:06:46.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.764 Nvme0n1 : 7.00 23007.86 89.87 0.00 0.00 0.00 0.00 0.00 00:06:46.764 [2024-11-19T16:24:48.987Z] =================================================================================================================== 00:06:46.764 [2024-11-19T16:24:48.987Z] Total : 23007.86 89.87 0.00 0.00 0.00 0.00 0.00 00:06:46.764 00:06:47.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.701 Nvme0n1 : 8.00 23026.50 89.95 0.00 0.00 0.00 0.00 0.00 00:06:47.701 [2024-11-19T16:24:49.924Z] =================================================================================================================== 00:06:47.701 [2024-11-19T16:24:49.924Z] Total : 23026.50 89.95 0.00 0.00 0.00 0.00 0.00 00:06:47.701 00:06:48.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.638 Nvme0n1 : 9.00 23034.00 89.98 0.00 0.00 0.00 0.00 0.00 00:06:48.638 [2024-11-19T16:24:50.861Z] =================================================================================================================== 00:06:48.638 [2024-11-19T16:24:50.861Z] Total : 23034.00 89.98 0.00 0.00 0.00 0.00 0.00 00:06:48.638 00:06:49.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.574 Nvme0n1 : 10.00 23056.60 90.06 0.00 0.00 0.00 0.00 0.00 00:06:49.574 [2024-11-19T16:24:51.797Z] =================================================================================================================== 00:06:49.574 [2024-11-19T16:24:51.797Z] Total : 23056.60 90.06 0.00 0.00 0.00 0.00 0.00 00:06:49.574 00:06:49.574 00:06:49.574 Latency(us) 00:06:49.574 [2024-11-19T16:24:51.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:49.574 Nvme0n1 : 10.01 23055.23 90.06 0.00 0.00 5549.09 3248.31 14360.93 00:06:49.574 [2024-11-19T16:24:51.797Z] =================================================================================================================== 00:06:49.574 [2024-11-19T16:24:51.797Z] Total : 23055.23 90.06 0.00 0.00 5549.09 3248.31 14360.93 00:06:49.574 { 00:06:49.574 "results": [ 00:06:49.574 { 00:06:49.574 "job": "Nvme0n1", 00:06:49.574 "core_mask": "0x2", 00:06:49.574 "workload": "randwrite", 00:06:49.574 "status": "finished", 00:06:49.574 "queue_depth": 128, 00:06:49.574 "io_size": 4096, 00:06:49.574 "runtime": 10.006148, 00:06:49.574 "iops": 23055.22564727206, 00:06:49.574 "mibps": 90.05947518465648, 00:06:49.574 "io_failed": 0, 00:06:49.574 "io_timeout": 0, 00:06:49.574 "avg_latency_us": 5549.094675672385, 00:06:49.574 "min_latency_us": 3248.3060869565215, 00:06:49.574 "max_latency_us": 14360.932173913043 00:06:49.574 } 00:06:49.574 ], 00:06:49.574 "core_count": 1 00:06:49.574 } 00:06:49.574 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3308297 00:06:49.574 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3308297 ']' 00:06:49.574 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3308297 00:06:49.574 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:49.574 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.574 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3308297 00:06:49.833 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:49.833 17:24:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:49.833 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3308297' 00:06:49.833 killing process with pid 3308297 00:06:49.833 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3308297 00:06:49.833 Received shutdown signal, test time was about 10.000000 seconds 00:06:49.833 00:06:49.833 Latency(us) 00:06:49.833 [2024-11-19T16:24:52.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.833 [2024-11-19T16:24:52.056Z] =================================================================================================================== 00:06:49.833 [2024-11-19T16:24:52.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:49.833 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3308297 00:06:49.833 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.092 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:50.351 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:50.351 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:50.610 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:50.610 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:50.610 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:50.610 [2024-11-19 17:24:52.760412] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.611 
17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:50.611 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:50.870 request: 00:06:50.870 { 00:06:50.870 "uuid": "740730f0-5a20-4920-9dc3-955d274856cf", 00:06:50.870 "method": "bdev_lvol_get_lvstores", 00:06:50.870 "req_id": 1 00:06:50.870 } 00:06:50.870 Got JSON-RPC error response 00:06:50.870 response: 00:06:50.870 { 00:06:50.870 "code": -19, 00:06:50.870 "message": "No such device" 00:06:50.870 } 00:06:50.870 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:50.870 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.870 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.870 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.870 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:51.129 aio_bdev 00:06:51.129 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c7be8fc6-6bdf-49c3-82af-f7bd090717c3 00:06:51.129 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c7be8fc6-6bdf-49c3-82af-f7bd090717c3 00:06:51.129 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:51.129 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:51.129 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:51.129 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:51.129 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:51.389 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c7be8fc6-6bdf-49c3-82af-f7bd090717c3 -t 2000 00:06:51.389 [ 00:06:51.389 { 00:06:51.389 "name": "c7be8fc6-6bdf-49c3-82af-f7bd090717c3", 00:06:51.389 "aliases": [ 00:06:51.389 "lvs/lvol" 00:06:51.389 ], 00:06:51.389 "product_name": "Logical Volume", 00:06:51.389 "block_size": 4096, 00:06:51.389 "num_blocks": 38912, 00:06:51.389 "uuid": "c7be8fc6-6bdf-49c3-82af-f7bd090717c3", 00:06:51.389 "assigned_rate_limits": { 00:06:51.389 "rw_ios_per_sec": 0, 00:06:51.389 "rw_mbytes_per_sec": 0, 00:06:51.389 "r_mbytes_per_sec": 0, 00:06:51.389 "w_mbytes_per_sec": 0 00:06:51.389 }, 00:06:51.389 "claimed": false, 00:06:51.389 "zoned": false, 00:06:51.389 "supported_io_types": { 00:06:51.389 "read": true, 00:06:51.389 "write": true, 00:06:51.389 "unmap": true, 00:06:51.389 "flush": false, 00:06:51.389 "reset": true, 00:06:51.389 
"nvme_admin": false, 00:06:51.389 "nvme_io": false, 00:06:51.389 "nvme_io_md": false, 00:06:51.389 "write_zeroes": true, 00:06:51.389 "zcopy": false, 00:06:51.389 "get_zone_info": false, 00:06:51.389 "zone_management": false, 00:06:51.389 "zone_append": false, 00:06:51.389 "compare": false, 00:06:51.389 "compare_and_write": false, 00:06:51.389 "abort": false, 00:06:51.389 "seek_hole": true, 00:06:51.389 "seek_data": true, 00:06:51.389 "copy": false, 00:06:51.389 "nvme_iov_md": false 00:06:51.389 }, 00:06:51.389 "driver_specific": { 00:06:51.389 "lvol": { 00:06:51.389 "lvol_store_uuid": "740730f0-5a20-4920-9dc3-955d274856cf", 00:06:51.389 "base_bdev": "aio_bdev", 00:06:51.389 "thin_provision": false, 00:06:51.389 "num_allocated_clusters": 38, 00:06:51.389 "snapshot": false, 00:06:51.389 "clone": false, 00:06:51.389 "esnap_clone": false 00:06:51.389 } 00:06:51.389 } 00:06:51.389 } 00:06:51.389 ] 00:06:51.389 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:51.389 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:51.389 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:51.648 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:51.648 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:51.648 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:51.907 17:24:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:51.907 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c7be8fc6-6bdf-49c3-82af-f7bd090717c3 00:06:51.907 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 740730f0-5a20-4920-9dc3-955d274856cf 00:06:52.166 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.425 00:06:52.425 real 0m15.729s 00:06:52.425 user 0m15.259s 00:06:52.425 sys 0m1.487s 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:52.425 ************************************ 00:06:52.425 END TEST lvs_grow_clean 00:06:52.425 ************************************ 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.425 ************************************ 
00:06:52.425 START TEST lvs_grow_dirty 00:06:52.425 ************************************ 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.425 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:52.684 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:52.684 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:06:52.943 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:06:52.943 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:06:52.943 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:06:53.203 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:06:53.203 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:06:53.203 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf lvol 150
00:06:53.203 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7e21f18-7dfd-4897-9b8c-7206610c400a
00:06:53.203 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:06:53.203 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:06:53.462 [2024-11-19 17:24:55.558910] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:06:53.462 [2024-11-19 17:24:55.558963] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:06:53.462 true
00:06:53.462 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:06:53.462 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:06:53.720 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:06:53.720 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:06:53.720 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7e21f18-7dfd-4897-9b8c-7206610c400a
00:06:53.979 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:06:54.238 [2024-11-19 17:24:56.289068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:54.238 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3311086
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3311086 /var/tmp/bdevperf.sock
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3311086 ']'
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:06:54.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:54.498 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:06:54.498 [2024-11-19 17:24:56.525744] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
00:06:54.498 [2024-11-19 17:24:56.525789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311086 ]
00:06:54.498 [2024-11-19 17:24:56.598495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.498 [2024-11-19 17:24:56.641502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.758 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:54.758 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:06:54.758 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:06:54.758 Nvme0n1
00:06:55.017 17:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:06:55.017 [
00:06:55.017 {
00:06:55.017 "name": "Nvme0n1",
00:06:55.017 "aliases": [
00:06:55.017 "e7e21f18-7dfd-4897-9b8c-7206610c400a"
00:06:55.017 ],
00:06:55.017 "product_name": "NVMe disk",
00:06:55.017 "block_size": 4096,
00:06:55.017 "num_blocks": 38912,
00:06:55.017 "uuid": "e7e21f18-7dfd-4897-9b8c-7206610c400a",
00:06:55.017 "numa_id": 1,
00:06:55.017 "assigned_rate_limits": {
00:06:55.017 "rw_ios_per_sec": 0,
00:06:55.017 "rw_mbytes_per_sec": 0,
00:06:55.017 "r_mbytes_per_sec": 0,
00:06:55.017 "w_mbytes_per_sec": 0
00:06:55.017 },
00:06:55.017 "claimed": false,
00:06:55.017 "zoned": false,
00:06:55.017 "supported_io_types": {
00:06:55.017 "read": true,
00:06:55.017 "write": true,
00:06:55.017 "unmap": true,
00:06:55.017 "flush": true,
00:06:55.017 "reset": true,
00:06:55.017 "nvme_admin": true,
00:06:55.017 "nvme_io": true,
00:06:55.017 "nvme_io_md": false,
00:06:55.017 "write_zeroes": true,
00:06:55.017 "zcopy": false,
00:06:55.017 "get_zone_info": false,
00:06:55.017 "zone_management": false,
00:06:55.017 "zone_append": false,
00:06:55.017 "compare": true,
00:06:55.017 "compare_and_write": true,
00:06:55.017 "abort": true,
00:06:55.017 "seek_hole": false,
00:06:55.017 "seek_data": false,
00:06:55.017 "copy": true,
00:06:55.017 "nvme_iov_md": false
00:06:55.017 },
00:06:55.017 "memory_domains": [
00:06:55.017 {
00:06:55.017 "dma_device_id": "system",
00:06:55.017 "dma_device_type": 1
00:06:55.017 }
00:06:55.017 ],
00:06:55.017 "driver_specific": {
00:06:55.017 "nvme": [
00:06:55.017 {
00:06:55.017 "trid": {
00:06:55.017 "trtype": "TCP",
00:06:55.017 "adrfam": "IPv4",
00:06:55.017 "traddr": "10.0.0.2",
00:06:55.017 "trsvcid": "4420",
00:06:55.017 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:06:55.017 },
00:06:55.017 "ctrlr_data": {
00:06:55.017 "cntlid": 1,
00:06:55.017 "vendor_id": "0x8086",
00:06:55.017 "model_number": "SPDK bdev Controller",
00:06:55.017 "serial_number": "SPDK0",
00:06:55.017 "firmware_revision": "25.01",
00:06:55.017 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:55.017 "oacs": {
00:06:55.017 "security": 0,
00:06:55.017 "format": 0,
00:06:55.017 "firmware": 0,
00:06:55.017 "ns_manage": 0
00:06:55.017 },
00:06:55.017 "multi_ctrlr": true,
00:06:55.017 "ana_reporting": false
00:06:55.017 },
00:06:55.017 "vs": {
00:06:55.017 "nvme_version": "1.3"
00:06:55.017 },
00:06:55.017 "ns_data": {
00:06:55.017 "id": 1,
00:06:55.017 "can_share": true
00:06:55.017 }
00:06:55.017 }
00:06:55.017 ],
00:06:55.017 "mp_policy": "active_passive"
00:06:55.017 }
00:06:55.017 }
00:06:55.017 ]
00:06:55.017 17:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3311133
00:06:55.017 17:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:06:55.017 17:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:06:55.276 Running I/O for 10 seconds...
00:06:56.213 Latency(us)
00:06:56.213 [2024-11-19T16:24:58.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:56.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:06:56.213 Nvme0n1 : 1.00 22613.00 88.33 0.00 0.00 0.00 0.00 0.00
00:06:56.213 [2024-11-19T16:24:58.436Z] ===================================================================================================================
00:06:56.213 [2024-11-19T16:24:58.436Z] Total : 22613.00 88.33 0.00 0.00 0.00 0.00 0.00
00:06:56.213
00:06:57.157 17:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:06:57.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:06:57.157 Nvme0n1 : 2.00 22768.50 88.94 0.00 0.00 0.00 0.00 0.00
00:06:57.157 [2024-11-19T16:24:59.380Z] ===================================================================================================================
00:06:57.157 [2024-11-19T16:24:59.380Z] Total : 22768.50 88.94 0.00 0.00 0.00 0.00 0.00
00:06:57.157
00:06:57.416 true
00:06:57.416 17:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:06:57.416 17:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:06:57.416 17:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:06:57.416 17:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:06:57.416 17:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3311133
00:06:58.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:06:58.351 Nvme0n1 : 3.00 22763.67 88.92 0.00 0.00 0.00 0.00 0.00
00:06:58.351 [2024-11-19T16:25:00.574Z] ===================================================================================================================
00:06:58.351 [2024-11-19T16:25:00.574Z] Total : 22763.67 88.92 0.00 0.00 0.00 0.00 0.00
00:06:58.351
00:06:59.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:06:59.288 Nvme0n1 : 4.00 22764.50 88.92 0.00 0.00 0.00 0.00 0.00
00:06:59.288 [2024-11-19T16:25:01.511Z] ===================================================================================================================
00:06:59.288 [2024-11-19T16:25:01.511Z] Total : 22764.50 88.92 0.00 0.00 0.00 0.00 0.00
00:06:59.288
00:07:00.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:00.225 Nvme0n1 : 5.00 22821.60 89.15 0.00 0.00 0.00 0.00 0.00
00:07:00.225 [2024-11-19T16:25:02.448Z] ===================================================================================================================
00:07:00.225 [2024-11-19T16:25:02.448Z] Total : 22821.60 89.15 0.00 0.00 0.00 0.00 0.00
00:07:00.225
00:07:01.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:01.162 Nvme0n1 : 6.00 22875.00 89.36 0.00 0.00 0.00 0.00 0.00
00:07:01.162 [2024-11-19T16:25:03.385Z] ===================================================================================================================
00:07:01.162 [2024-11-19T16:25:03.385Z] Total : 22875.00 89.36 0.00 0.00 0.00 0.00 0.00
00:07:01.162
00:07:02.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:02.099 Nvme0n1 : 7.00 22906.00 89.48 0.00 0.00 0.00 0.00 0.00
00:07:02.099 [2024-11-19T16:25:04.322Z] ===================================================================================================================
00:07:02.099 [2024-11-19T16:25:04.322Z] Total : 22906.00 89.48 0.00 0.00 0.00 0.00 0.00
00:07:02.099
00:07:03.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:03.475 Nvme0n1 : 8.00 22943.75 89.62 0.00 0.00 0.00 0.00 0.00
00:07:03.475 [2024-11-19T16:25:05.698Z] ===================================================================================================================
00:07:03.475 [2024-11-19T16:25:05.698Z] Total : 22943.75 89.62 0.00 0.00 0.00 0.00 0.00
00:07:03.475
00:07:04.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:04.411 Nvme0n1 : 9.00 22973.44 89.74 0.00 0.00 0.00 0.00 0.00
00:07:04.411 [2024-11-19T16:25:06.634Z] ===================================================================================================================
00:07:04.411 [2024-11-19T16:25:06.634Z] Total : 22973.44 89.74 0.00 0.00 0.00 0.00 0.00
00:07:04.411
00:07:05.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:05.347 Nvme0n1 : 10.00 22989.90 89.80 0.00 0.00 0.00 0.00 0.00
00:07:05.347 [2024-11-19T16:25:07.570Z] ===================================================================================================================
00:07:05.347 [2024-11-19T16:25:07.570Z] Total : 22989.90 89.80 0.00 0.00 0.00 0.00 0.00
00:07:05.347
00:07:05.347
00:07:05.347 Latency(us)
00:07:05.347 [2024-11-19T16:25:07.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:05.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:05.347 Nvme0n1 : 10.00 22993.95 89.82 0.00 0.00 5563.75 3191.32 13620.09
00:07:05.347 [2024-11-19T16:25:07.570Z] ===================================================================================================================
00:07:05.347 [2024-11-19T16:25:07.570Z] Total : 22993.95 89.82 0.00 0.00 5563.75 3191.32 13620.09
00:07:05.347 {
00:07:05.347 "results": [
00:07:05.347 {
00:07:05.347 "job": "Nvme0n1",
00:07:05.347 "core_mask": "0x2",
00:07:05.347 "workload": "randwrite",
00:07:05.347 "status": "finished",
00:07:05.347 "queue_depth": 128,
00:07:05.347 "io_size": 4096,
00:07:05.347 "runtime": 10.003806,
00:07:05.347 "iops": 22993.94850319968,
00:07:05.347 "mibps": 89.82011134062375,
00:07:05.347 "io_failed": 0,
00:07:05.347 "io_timeout": 0,
00:07:05.347 "avg_latency_us": 5563.751249617011,
00:07:05.347 "min_latency_us": 3191.318260869565,
00:07:05.347 "max_latency_us": 13620.090434782609
00:07:05.347 }
00:07:05.347 ],
00:07:05.347 "core_count": 1
00:07:05.347 }
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3311086
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3311086 ']'
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3311086
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3311086
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3311086'
00:07:05.348 killing process with pid 3311086
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3311086
00:07:05.348 Received shutdown signal, test time was about 10.000000 seconds
00:07:05.348
00:07:05.348 Latency(us)
00:07:05.348 [2024-11-19T16:25:07.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:05.348 [2024-11-19T16:25:07.571Z] ===================================================================================================================
00:07:05.348 [2024-11-19T16:25:07.571Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3311086
00:07:05.348 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:05.607 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:05.866 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:05.866 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3307791
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3307791
00:07:06.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3307791 Killed "${NVMF_APP[@]}" "$@"
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3312981
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3312981
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3312981 ']'
00:07:06.125 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:06.126 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:06.126 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:06.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:06.126 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:06.126 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:06.126 [2024-11-19 17:25:08.218520] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
00:07:06.126 [2024-11-19 17:25:08.218567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:06.126 [2024-11-19 17:25:08.295281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.126 [2024-11-19 17:25:08.336225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:06.126 [2024-11-19 17:25:08.336261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:06.126 [2024-11-19 17:25:08.336268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:06.126 [2024-11-19 17:25:08.336274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:06.126 [2024-11-19 17:25:08.336279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:06.126 [2024-11-19 17:25:08.336813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.385 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:06.385 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:07:06.385 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:06.385 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:06.385 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:06.385 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:06.385 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:06.644 [2024-11-19 17:25:08.630921] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:07:06.644 [2024-11-19 17:25:08.631012] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:07:06.644 [2024-11-19 17:25:08.631040] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e7e21f18-7dfd-4897-9b8c-7206610c400a
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e7e21f18-7dfd-4897-9b8c-7206610c400a
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:06.644 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7e21f18-7dfd-4897-9b8c-7206610c400a -t 2000
00:07:06.904 [
00:07:06.904 {
00:07:06.904 "name": "e7e21f18-7dfd-4897-9b8c-7206610c400a",
00:07:06.904 "aliases": [
00:07:06.904 "lvs/lvol"
00:07:06.904 ],
00:07:06.904 "product_name": "Logical Volume",
00:07:06.904 "block_size": 4096,
00:07:06.904 "num_blocks": 38912,
00:07:06.904 "uuid": "e7e21f18-7dfd-4897-9b8c-7206610c400a",
00:07:06.904 "assigned_rate_limits": {
00:07:06.904 "rw_ios_per_sec": 0,
00:07:06.904 "rw_mbytes_per_sec": 0,
00:07:06.904 "r_mbytes_per_sec": 0,
00:07:06.904 "w_mbytes_per_sec": 0
00:07:06.904 },
00:07:06.904 "claimed": false,
00:07:06.904 "zoned": false,
00:07:06.904 "supported_io_types": {
00:07:06.904 "read": true,
00:07:06.904 "write": true,
00:07:06.904 "unmap": true,
00:07:06.904 "flush": false,
00:07:06.904 "reset": true,
00:07:06.904 "nvme_admin": false,
00:07:06.904 "nvme_io": false,
00:07:06.904 "nvme_io_md": false,
00:07:06.904 "write_zeroes": true,
00:07:06.904 "zcopy": false,
00:07:06.904 "get_zone_info": false,
00:07:06.904 "zone_management": false,
00:07:06.904 "zone_append": false,
00:07:06.904 "compare": false,
00:07:06.904 "compare_and_write": false,
00:07:06.904 "abort": false,
00:07:06.904 "seek_hole": true,
00:07:06.904 "seek_data": true,
00:07:06.904 "copy": false,
00:07:06.904 "nvme_iov_md": false
00:07:06.904 },
00:07:06.904 "driver_specific": {
00:07:06.904 "lvol": {
00:07:06.904 "lvol_store_uuid": "182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf",
00:07:06.904 "base_bdev": "aio_bdev",
00:07:06.904 "thin_provision": false,
00:07:06.904 "num_allocated_clusters": 38,
00:07:06.904 "snapshot": false,
00:07:06.904 "clone": false,
00:07:06.904 "esnap_clone": false
00:07:06.904 }
00:07:06.904 }
00:07:06.904 }
00:07:06.904 ]
00:07:06.904 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:07:06.904 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:06.904 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:07:07.163 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:07:07.163 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:07.163 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:07.422 [2024-11-19 17:25:09.591969] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:07.422 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:07.681 request:
00:07:07.681 {
00:07:07.681 "uuid": "182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf",
00:07:07.681 "method": "bdev_lvol_get_lvstores",
00:07:07.681 "req_id": 1
00:07:07.681 }
00:07:07.681 Got JSON-RPC error response
00:07:07.681 response:
00:07:07.681 {
00:07:07.681 "code": -19,
00:07:07.681 "message": "No such device"
00:07:07.681 }
00:07:07.681 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:07:07.681 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:07.681 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:07.681 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:07.681 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:07.940 aio_bdev
00:07:07.940 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7e21f18-7dfd-4897-9b8c-7206610c400a
00:07:07.940 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e7e21f18-7dfd-4897-9b8c-7206610c400a
00:07:07.940 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:07.940 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:07:07.940 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:07.940 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:07.940 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:08.199 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7e21f18-7dfd-4897-9b8c-7206610c400a -t 2000
00:07:08.199 [
00:07:08.199 {
00:07:08.199 "name": "e7e21f18-7dfd-4897-9b8c-7206610c400a",
00:07:08.199 "aliases": [
00:07:08.199 "lvs/lvol"
00:07:08.199 ],
00:07:08.199 "product_name": "Logical Volume",
00:07:08.199 "block_size": 4096,
00:07:08.199 "num_blocks": 38912,
00:07:08.199 "uuid": "e7e21f18-7dfd-4897-9b8c-7206610c400a",
00:07:08.199 "assigned_rate_limits": {
00:07:08.199 "rw_ios_per_sec": 0,
00:07:08.199 "rw_mbytes_per_sec": 0,
00:07:08.199 "r_mbytes_per_sec": 0,
00:07:08.199 "w_mbytes_per_sec": 0
00:07:08.199 },
00:07:08.199 "claimed": false,
00:07:08.199 "zoned": false,
00:07:08.199 "supported_io_types": {
00:07:08.199 "read": true,
00:07:08.199 "write": true,
00:07:08.199 "unmap": true,
00:07:08.199 "flush": false,
00:07:08.199 "reset": true,
00:07:08.199 "nvme_admin": false,
00:07:08.199 "nvme_io": false,
00:07:08.199 "nvme_io_md": false,
00:07:08.199 "write_zeroes": true,
00:07:08.199 "zcopy": false,
00:07:08.199 "get_zone_info": false,
00:07:08.199 "zone_management": false,
00:07:08.199 "zone_append": false,
00:07:08.199 "compare": false,
00:07:08.199 "compare_and_write": false,
00:07:08.199 "abort": false,
00:07:08.199 "seek_hole": true,
00:07:08.199 "seek_data": true,
00:07:08.199 "copy": false,
00:07:08.199 "nvme_iov_md": false
00:07:08.199 },
00:07:08.199 "driver_specific": {
00:07:08.200 "lvol": {
00:07:08.200 "lvol_store_uuid": "182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf",
00:07:08.200 "base_bdev": "aio_bdev",
00:07:08.200 "thin_provision": false,
00:07:08.200 "num_allocated_clusters": 38,
00:07:08.200 "snapshot": false,
00:07:08.200 "clone": false,
00:07:08.200 "esnap_clone": false
00:07:08.200 }
00:07:08.200 }
00:07:08.200 }
00:07:08.200 ]
00:07:08.200 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:07:08.200 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:08.200 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:08.459 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:08.459 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:08.459 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:08.718 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:08.718 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7e21f18-7dfd-4897-9b8c-7206610c400a
00:07:08.978 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 182ad3d6-08c5-4c3d-b22f-c8cd33ca6abf
00:07:08.978 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:09.237
00:07:09.237 real 0m16.825s
00:07:09.237 user 0m43.580s
00:07:09.237 sys 0m3.889s
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:09.237 ************************************
00:07:09.237 END TEST lvs_grow_dirty
00:07:09.237 ************************************
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:09.237 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:09.237 nvmf_trace.0 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.496 rmmod nvme_tcp 00:07:09.496 rmmod nvme_fabrics 00:07:09.496 rmmod nvme_keyring 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3312981 ']' 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3312981 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3312981 ']' 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3312981 
00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3312981 00:07:09.496 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.497 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.497 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3312981' 00:07:09.497 killing process with pid 3312981 00:07:09.497 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3312981 00:07:09.497 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3312981 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.756 17:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.661 17:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.661 00:07:11.661 real 0m41.827s 00:07:11.661 user 1m4.457s 00:07:11.661 sys 0m10.382s 00:07:11.661 17:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.661 17:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.661 ************************************ 00:07:11.661 END TEST nvmf_lvs_grow 00:07:11.662 ************************************ 00:07:11.662 17:25:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:11.662 17:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.662 17:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.662 17:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.922 ************************************ 00:07:11.922 START TEST nvmf_bdev_io_wait 00:07:11.922 ************************************ 00:07:11.922 17:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:11.922 * Looking for test storage... 
00:07:11.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.922 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.922 --rc genhtml_branch_coverage=1 00:07:11.922 --rc genhtml_function_coverage=1 00:07:11.922 --rc genhtml_legend=1 00:07:11.922 --rc geninfo_all_blocks=1 00:07:11.922 --rc geninfo_unexecuted_blocks=1 00:07:11.922 00:07:11.922 ' 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.922 --rc genhtml_branch_coverage=1 00:07:11.922 --rc genhtml_function_coverage=1 00:07:11.922 --rc genhtml_legend=1 00:07:11.922 --rc geninfo_all_blocks=1 00:07:11.922 --rc geninfo_unexecuted_blocks=1 00:07:11.922 00:07:11.922 ' 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.922 --rc genhtml_branch_coverage=1 00:07:11.922 --rc genhtml_function_coverage=1 00:07:11.922 --rc genhtml_legend=1 00:07:11.922 --rc geninfo_all_blocks=1 00:07:11.922 --rc geninfo_unexecuted_blocks=1 00:07:11.922 00:07:11.922 ' 00:07:11.922 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.922 --rc genhtml_branch_coverage=1 00:07:11.922 --rc genhtml_function_coverage=1 00:07:11.922 --rc genhtml_legend=1 00:07:11.922 --rc geninfo_all_blocks=1 00:07:11.922 --rc geninfo_unexecuted_blocks=1 00:07:11.923 00:07:11.923 ' 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.923 17:25:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.923 17:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:18.495 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.496 17:25:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:18.496 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:18.496 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.496 17:25:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:18.496 Found net devices under 0000:86:00.0: cvl_0_0 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.496 
17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:18.496 Found net devices under 0000:86:00.1: cvl_0_1 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.496 17:25:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.496 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:07:18.496 00:07:18.496 --- 10.0.0.2 ping statistics --- 00:07:18.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.496 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:18.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:07:18.496 00:07:18.496 --- 10.0.0.1 ping statistics --- 00:07:18.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.496 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.496 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3317216 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3317216 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3317216 ']' 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 [2024-11-19 17:25:20.167935] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:07:18.497 [2024-11-19 17:25:20.167988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.497 [2024-11-19 17:25:20.248388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.497 [2024-11-19 17:25:20.293059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.497 [2024-11-19 17:25:20.293097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:18.497 [2024-11-19 17:25:20.293105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.497 [2024-11-19 17:25:20.293111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.497 [2024-11-19 17:25:20.293116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.497 [2024-11-19 17:25:20.294749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.497 [2024-11-19 17:25:20.294858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.497 [2024-11-19 17:25:20.294977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.497 [2024-11-19 17:25:20.294978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 17:25:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 [2024-11-19 17:25:20.427489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 Malloc0 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 
17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 [2024-11-19 17:25:20.483080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3317289 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3317291 
00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.497 { 00:07:18.497 "params": { 00:07:18.497 "name": "Nvme$subsystem", 00:07:18.497 "trtype": "$TEST_TRANSPORT", 00:07:18.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.497 "adrfam": "ipv4", 00:07:18.497 "trsvcid": "$NVMF_PORT", 00:07:18.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.497 "hdgst": ${hdgst:-false}, 00:07:18.497 "ddgst": ${ddgst:-false} 00:07:18.497 }, 00:07:18.497 "method": "bdev_nvme_attach_controller" 00:07:18.497 } 00:07:18.497 EOF 00:07:18.497 )") 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3317293 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:18.497 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.497 { 00:07:18.497 "params": { 00:07:18.497 "name": "Nvme$subsystem", 00:07:18.497 "trtype": "$TEST_TRANSPORT", 00:07:18.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.497 "adrfam": "ipv4", 00:07:18.497 "trsvcid": "$NVMF_PORT", 00:07:18.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.497 "hdgst": ${hdgst:-false}, 00:07:18.497 "ddgst": ${ddgst:-false} 00:07:18.497 }, 00:07:18.498 "method": "bdev_nvme_attach_controller" 00:07:18.498 } 00:07:18.498 EOF 00:07:18.498 )") 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3317296 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:18.498 17:25:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.498 { 00:07:18.498 "params": { 00:07:18.498 "name": "Nvme$subsystem", 00:07:18.498 "trtype": "$TEST_TRANSPORT", 00:07:18.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.498 "adrfam": "ipv4", 00:07:18.498 "trsvcid": "$NVMF_PORT", 00:07:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.498 "hdgst": ${hdgst:-false}, 00:07:18.498 "ddgst": ${ddgst:-false} 00:07:18.498 }, 00:07:18.498 "method": "bdev_nvme_attach_controller" 00:07:18.498 } 00:07:18.498 EOF 00:07:18.498 )") 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.498 { 00:07:18.498 "params": { 00:07:18.498 "name": "Nvme$subsystem", 00:07:18.498 "trtype": "$TEST_TRANSPORT", 00:07:18.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.498 "adrfam": "ipv4", 00:07:18.498 "trsvcid": "$NVMF_PORT", 00:07:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.498 "hdgst": ${hdgst:-false}, 00:07:18.498 "ddgst": ${ddgst:-false} 00:07:18.498 }, 00:07:18.498 "method": "bdev_nvme_attach_controller" 00:07:18.498 } 00:07:18.498 EOF 00:07:18.498 )") 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3317289 00:07:18.498 17:25:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.498 "params": { 00:07:18.498 "name": "Nvme1", 00:07:18.498 "trtype": "tcp", 00:07:18.498 "traddr": "10.0.0.2", 00:07:18.498 "adrfam": "ipv4", 00:07:18.498 "trsvcid": "4420", 00:07:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.498 "hdgst": false, 00:07:18.498 "ddgst": false 00:07:18.498 }, 00:07:18.498 "method": "bdev_nvme_attach_controller" 00:07:18.498 }' 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.498 "params": { 00:07:18.498 "name": "Nvme1", 00:07:18.498 "trtype": "tcp", 00:07:18.498 "traddr": "10.0.0.2", 00:07:18.498 "adrfam": "ipv4", 00:07:18.498 "trsvcid": "4420", 00:07:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.498 "hdgst": false, 00:07:18.498 "ddgst": false 00:07:18.498 }, 00:07:18.498 "method": "bdev_nvme_attach_controller" 00:07:18.498 }' 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.498 "params": { 00:07:18.498 "name": "Nvme1", 00:07:18.498 "trtype": "tcp", 00:07:18.498 "traddr": "10.0.0.2", 00:07:18.498 "adrfam": "ipv4", 00:07:18.498 "trsvcid": "4420", 00:07:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.498 "hdgst": false, 00:07:18.498 "ddgst": false 00:07:18.498 }, 00:07:18.498 "method": "bdev_nvme_attach_controller" 00:07:18.498 }' 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.498 17:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.498 "params": { 00:07:18.498 "name": "Nvme1", 00:07:18.498 "trtype": "tcp", 00:07:18.498 "traddr": "10.0.0.2", 00:07:18.498 "adrfam": "ipv4", 00:07:18.498 "trsvcid": "4420", 00:07:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.498 "hdgst": false, 00:07:18.498 "ddgst": false 00:07:18.498 }, 00:07:18.498 "method": "bdev_nvme_attach_controller" 00:07:18.498 }' 00:07:18.498 [2024-11-19 17:25:20.533191] Starting SPDK v25.01-pre git sha1 
ea8382642 / DPDK 24.03.0 initialization... 00:07:18.498 [2024-11-19 17:25:20.533241] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:18.498 [2024-11-19 17:25:20.534858] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:07:18.498 [2024-11-19 17:25:20.534897] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:18.498 [2024-11-19 17:25:20.535717] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:07:18.498 [2024-11-19 17:25:20.535756] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:18.498 [2024-11-19 17:25:20.538820] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:07:18.498 [2024-11-19 17:25:20.538866] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:18.757 [2024-11-19 17:25:20.721775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.757 [2024-11-19 17:25:20.764529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.757 [2024-11-19 17:25:20.814285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.757 [2024-11-19 17:25:20.857890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:18.757 [2024-11-19 17:25:20.918545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.757 [2024-11-19 17:25:20.975982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.016 [2024-11-19 17:25:20.980058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:19.016 [2024-11-19 17:25:21.019170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:19.016 Running I/O for 1 seconds... 00:07:19.016 Running I/O for 1 seconds... 00:07:19.016 Running I/O for 1 seconds... 00:07:19.016 Running I/O for 1 seconds... 
00:07:19.952 8377.00 IOPS, 32.72 MiB/s 00:07:19.952 Latency(us) 00:07:19.952 [2024-11-19T16:25:22.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.952 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:19.952 Nvme1n1 : 1.02 8349.80 32.62 0.00 0.00 15154.48 5955.23 27126.21 00:07:19.952 [2024-11-19T16:25:22.175Z] =================================================================================================================== 00:07:19.952 [2024-11-19T16:25:22.175Z] Total : 8349.80 32.62 0.00 0.00 15154.48 5955.23 27126.21 00:07:19.952 11201.00 IOPS, 43.75 MiB/s 00:07:19.952 Latency(us) 00:07:19.952 [2024-11-19T16:25:22.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.952 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:19.952 Nvme1n1 : 1.01 11238.97 43.90 0.00 0.00 11341.22 7009.50 22111.28 00:07:19.952 [2024-11-19T16:25:22.175Z] =================================================================================================================== 00:07:19.952 [2024-11-19T16:25:22.175Z] Total : 11238.97 43.90 0.00 0.00 11341.22 7009.50 22111.28 00:07:20.211 8187.00 IOPS, 31.98 MiB/s 00:07:20.211 Latency(us) 00:07:20.211 [2024-11-19T16:25:22.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.211 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:20.211 Nvme1n1 : 1.01 8308.59 32.46 0.00 0.00 15372.39 3590.23 35788.35 00:07:20.211 [2024-11-19T16:25:22.434Z] =================================================================================================================== 00:07:20.211 [2024-11-19T16:25:22.434Z] Total : 8308.59 32.46 0.00 0.00 15372.39 3590.23 35788.35 00:07:20.211 247200.00 IOPS, 965.62 MiB/s 00:07:20.211 Latency(us) 00:07:20.211 [2024-11-19T16:25:22.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.211 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:07:20.211 Nvme1n1 : 1.00 246814.58 964.12 0.00 0.00 516.44 231.51 1538.67 00:07:20.211 [2024-11-19T16:25:22.434Z] =================================================================================================================== 00:07:20.211 [2024-11-19T16:25:22.434Z] Total : 246814.58 964.12 0.00 0.00 516.44 231.51 1538.67 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3317291 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3317293 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3317296 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:20.211 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.212 
17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.212 rmmod nvme_tcp 00:07:20.212 rmmod nvme_fabrics 00:07:20.212 rmmod nvme_keyring 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3317216 ']' 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3317216 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3317216 ']' 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3317216 00:07:20.212 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3317216 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3317216' 00:07:20.471 killing process with pid 3317216 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3317216 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 3317216 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.471 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:23.008 00:07:23.008 real 0m10.784s 00:07:23.008 user 0m16.284s 00:07:23.008 sys 0m6.093s 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.008 ************************************ 00:07:23.008 END TEST nvmf_bdev_io_wait 
00:07:23.008 ************************************ 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.008 ************************************ 00:07:23.008 START TEST nvmf_queue_depth 00:07:23.008 ************************************ 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:23.008 * Looking for test storage... 00:07:23.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.008 17:25:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.008 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.009 --rc genhtml_branch_coverage=1 00:07:23.009 --rc genhtml_function_coverage=1 00:07:23.009 --rc genhtml_legend=1 00:07:23.009 --rc geninfo_all_blocks=1 00:07:23.009 --rc 
geninfo_unexecuted_blocks=1 00:07:23.009 00:07:23.009 ' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.009 --rc genhtml_branch_coverage=1 00:07:23.009 --rc genhtml_function_coverage=1 00:07:23.009 --rc genhtml_legend=1 00:07:23.009 --rc geninfo_all_blocks=1 00:07:23.009 --rc geninfo_unexecuted_blocks=1 00:07:23.009 00:07:23.009 ' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.009 --rc genhtml_branch_coverage=1 00:07:23.009 --rc genhtml_function_coverage=1 00:07:23.009 --rc genhtml_legend=1 00:07:23.009 --rc geninfo_all_blocks=1 00:07:23.009 --rc geninfo_unexecuted_blocks=1 00:07:23.009 00:07:23.009 ' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.009 --rc genhtml_branch_coverage=1 00:07:23.009 --rc genhtml_function_coverage=1 00:07:23.009 --rc genhtml_legend=1 00:07:23.009 --rc geninfo_all_blocks=1 00:07:23.009 --rc geninfo_unexecuted_blocks=1 00:07:23.009 00:07:23.009 ' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.009 17:25:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.009 17:25:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.009 17:25:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.009 17:25:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.582 17:25:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:29.582 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:29.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.582 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:29.582 Found net devices under 0000:86:00.0: cvl_0_0 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:29.583 Found net devices under 0000:86:00.1: cvl_0_1 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.583 
17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:29.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:07:29.583 00:07:29.583 --- 10.0.0.2 ping statistics --- 00:07:29.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.583 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:07:29.583 00:07:29.583 --- 10.0.0.1 ping statistics --- 00:07:29.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.583 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3321096 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3321096 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3321096 ']' 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.583 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 [2024-11-19 17:25:31.032248] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:07:29.583 [2024-11-19 17:25:31.032296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.583 [2024-11-19 17:25:31.114261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.583 [2024-11-19 17:25:31.155060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.583 [2024-11-19 17:25:31.155097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:29.583 [2024-11-19 17:25:31.155104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.583 [2024-11-19 17:25:31.155111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.583 [2024-11-19 17:25:31.155116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.583 [2024-11-19 17:25:31.155683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 [2024-11-19 17:25:31.303175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 Malloc0 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.583 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.584 [2024-11-19 17:25:31.353569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.584 17:25:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3321266 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3321266 /var/tmp/bdevperf.sock 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3321266 ']' 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:29.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.584 [2024-11-19 17:25:31.402628] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:07:29.584 [2024-11-19 17:25:31.402669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321266 ] 00:07:29.584 [2024-11-19 17:25:31.475456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.584 [2024-11-19 17:25:31.518013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.584 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:29.847 NVMe0n1 00:07:29.847 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.847 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.847 Running I/O for 10 seconds... 
00:07:31.726 11312.00 IOPS, 44.19 MiB/s [2024-11-19T16:25:35.043Z] 11876.00 IOPS, 46.39 MiB/s [2024-11-19T16:25:35.981Z] 12006.00 IOPS, 46.90 MiB/s [2024-11-19T16:25:37.360Z] 12119.25 IOPS, 47.34 MiB/s [2024-11-19T16:25:38.296Z] 12248.00 IOPS, 47.84 MiB/s [2024-11-19T16:25:39.232Z] 12271.83 IOPS, 47.94 MiB/s [2024-11-19T16:25:40.171Z] 12277.86 IOPS, 47.96 MiB/s [2024-11-19T16:25:41.109Z] 12278.62 IOPS, 47.96 MiB/s [2024-11-19T16:25:42.045Z] 12277.33 IOPS, 47.96 MiB/s [2024-11-19T16:25:42.045Z] 12293.30 IOPS, 48.02 MiB/s 00:07:39.822 Latency(us) 00:07:39.822 [2024-11-19T16:25:42.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.822 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:39.822 Verification LBA range: start 0x0 length 0x4000 00:07:39.822 NVMe0n1 : 10.05 12329.51 48.16 0.00 0.00 82759.57 13107.20 51516.99 00:07:39.822 [2024-11-19T16:25:42.045Z] =================================================================================================================== 00:07:39.822 [2024-11-19T16:25:42.045Z] Total : 12329.51 48.16 0.00 0.00 82759.57 13107.20 51516.99 00:07:39.822 { 00:07:39.822 "results": [ 00:07:39.822 { 00:07:39.822 "job": "NVMe0n1", 00:07:39.822 "core_mask": "0x1", 00:07:39.822 "workload": "verify", 00:07:39.822 "status": "finished", 00:07:39.822 "verify_range": { 00:07:39.822 "start": 0, 00:07:39.822 "length": 16384 00:07:39.822 }, 00:07:39.822 "queue_depth": 1024, 00:07:39.822 "io_size": 4096, 00:07:39.822 "runtime": 10.053683, 00:07:39.822 "iops": 12329.51148350311, 00:07:39.822 "mibps": 48.16215423243403, 00:07:39.822 "io_failed": 0, 00:07:39.822 "io_timeout": 0, 00:07:39.822 "avg_latency_us": 82759.5736064154, 00:07:39.822 "min_latency_us": 13107.2, 00:07:39.822 "max_latency_us": 51516.994782608694 00:07:39.822 } 00:07:39.822 ], 00:07:39.822 "core_count": 1 00:07:39.822 } 00:07:39.822 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3321266 00:07:39.822 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3321266 ']' 00:07:39.822 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3321266 00:07:39.822 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:39.822 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.822 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3321266 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3321266' 00:07:40.080 killing process with pid 3321266 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3321266 00:07:40.080 Received shutdown signal, test time was about 10.000000 seconds 00:07:40.080 00:07:40.080 Latency(us) 00:07:40.080 [2024-11-19T16:25:42.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.080 [2024-11-19T16:25:42.303Z] =================================================================================================================== 00:07:40.080 [2024-11-19T16:25:42.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3321266 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.080 rmmod nvme_tcp 00:07:40.080 rmmod nvme_fabrics 00:07:40.080 rmmod nvme_keyring 00:07:40.080 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3321096 ']' 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3321096 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3321096 ']' 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3321096 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3321096 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3321096' 00:07:40.340 killing process with pid 3321096 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3321096 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3321096 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.340 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.879 17:25:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.879 00:07:42.879 real 0m19.829s 00:07:42.879 user 0m23.310s 00:07:42.879 sys 0m6.026s 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.879 ************************************ 00:07:42.879 END TEST nvmf_queue_depth 00:07:42.879 ************************************ 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.879 ************************************ 00:07:42.879 START TEST nvmf_target_multipath 00:07:42.879 ************************************ 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:42.879 * Looking for test storage... 
00:07:42.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:42.879 17:25:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.879 --rc genhtml_branch_coverage=1 00:07:42.879 --rc genhtml_function_coverage=1 00:07:42.879 --rc genhtml_legend=1 00:07:42.879 --rc geninfo_all_blocks=1 00:07:42.879 --rc geninfo_unexecuted_blocks=1 00:07:42.879 00:07:42.879 ' 00:07:42.879 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.879 --rc genhtml_branch_coverage=1 00:07:42.879 --rc genhtml_function_coverage=1 00:07:42.880 --rc genhtml_legend=1 00:07:42.880 --rc geninfo_all_blocks=1 00:07:42.880 --rc geninfo_unexecuted_blocks=1 00:07:42.880 00:07:42.880 ' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.880 --rc genhtml_branch_coverage=1 00:07:42.880 --rc genhtml_function_coverage=1 00:07:42.880 --rc genhtml_legend=1 00:07:42.880 --rc geninfo_all_blocks=1 00:07:42.880 --rc geninfo_unexecuted_blocks=1 00:07:42.880 00:07:42.880 ' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.880 --rc genhtml_branch_coverage=1 00:07:42.880 --rc genhtml_function_coverage=1 00:07:42.880 --rc genhtml_legend=1 00:07:42.880 --rc geninfo_all_blocks=1 00:07:42.880 --rc geninfo_unexecuted_blocks=1 00:07:42.880 00:07:42.880 ' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.880 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:49.473 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:49.473 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.473 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:49.474 Found net devices under 0000:86:00.0: cvl_0_0 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.474 17:25:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:49.474 Found net devices under 0000:86:00.1: cvl_0_1 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
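The `nvmf_tcp_init` records that follow (namespace creation, moving the target NIC, addressing both ends, bringing links up) can be sketched as a dry-run using the interface names and addresses from the trace. The `run`/`DRY_RUN` wrapper is an illustrative addition so the sketch is safe to execute without root, not part of nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP test-network setup performed in the
# trace: the target NIC moves into its own network namespace, both
# ends get a 10.0.0.0/24 address, and links come up. run() only
# echoes commands here; drop DRY_RUN to apply them for real (root).
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
```

After this, the trace's `ping -c 1 10.0.0.2` from the host and the reverse ping from inside the namespace verify both directions before the test proper starts.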
00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:07:49.474 00:07:49.474 --- 10.0.0.2 ping statistics --- 00:07:49.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.474 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:49.474 00:07:49.474 --- 10.0.0.1 ping statistics --- 00:07:49.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.474 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:49.474 only one NIC for nvmf test 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:49.474 17:25:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:49.474 rmmod nvme_tcp 00:07:49.474 rmmod nvme_fabrics 00:07:49.474 rmmod nvme_keyring 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.474 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.475 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.856 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.856 00:07:50.856 real 0m8.387s 00:07:50.856 user 0m1.873s 00:07:50.856 sys 0m4.531s 00:07:50.857 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.857 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:50.857 ************************************ 00:07:50.857 END TEST nvmf_target_multipath 00:07:50.857 ************************************ 00:07:51.116 17:25:53 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.117 ************************************ 00:07:51.117 START TEST nvmf_zcopy 00:07:51.117 ************************************ 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:51.117 * Looking for test storage... 00:07:51.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.117 17:25:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.117 --rc genhtml_branch_coverage=1 00:07:51.117 --rc genhtml_function_coverage=1 00:07:51.117 --rc genhtml_legend=1 00:07:51.117 --rc geninfo_all_blocks=1 00:07:51.117 --rc geninfo_unexecuted_blocks=1 00:07:51.117 00:07:51.117 ' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.117 --rc genhtml_branch_coverage=1 00:07:51.117 --rc genhtml_function_coverage=1 00:07:51.117 --rc genhtml_legend=1 00:07:51.117 --rc geninfo_all_blocks=1 00:07:51.117 --rc geninfo_unexecuted_blocks=1 00:07:51.117 00:07:51.117 ' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.117 --rc genhtml_branch_coverage=1 00:07:51.117 --rc genhtml_function_coverage=1 00:07:51.117 --rc genhtml_legend=1 00:07:51.117 --rc geninfo_all_blocks=1 00:07:51.117 --rc geninfo_unexecuted_blocks=1 00:07:51.117 00:07:51.117 ' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.117 --rc genhtml_branch_coverage=1 00:07:51.117 --rc 
genhtml_function_coverage=1 00:07:51.117 --rc genhtml_legend=1 00:07:51.117 --rc geninfo_all_blocks=1 00:07:51.117 --rc geninfo_unexecuted_blocks=1 00:07:51.117 00:07:51.117 ' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.117 17:25:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.117 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.377 17:25:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.377 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:57.951 17:25:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:57.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:57.951 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:57.951 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:57.952 Found net devices under 0000:86:00.0: cvl_0_0 00:07:57.952 17:25:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:57.952 Found net devices under 0000:86:00.1: cvl_0_1 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.952 17:25:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:57.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:07:57.952 00:07:57.952 --- 10.0.0.2 ping statistics --- 00:07:57.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.952 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:07:57.952 00:07:57.952 --- 10.0.0.1 ping statistics --- 00:07:57.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.952 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3330114 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3330114 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3330114 ']' 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 [2024-11-19 17:25:59.396381] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:07:57.952 [2024-11-19 17:25:59.396435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.952 [2024-11-19 17:25:59.477281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.952 [2024-11-19 17:25:59.518171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.952 [2024-11-19 17:25:59.518208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:57.952 [2024-11-19 17:25:59.518215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.952 [2024-11-19 17:25:59.518221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.952 [2024-11-19 17:25:59.518226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.952 [2024-11-19 17:25:59.518803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 [2024-11-19 17:25:59.649439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.952 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.953 [2024-11-19 17:25:59.669623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.953 malloc0 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.953 { 00:07:57.953 "params": { 00:07:57.953 "name": "Nvme$subsystem", 00:07:57.953 "trtype": "$TEST_TRANSPORT", 00:07:57.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.953 "adrfam": "ipv4", 00:07:57.953 "trsvcid": "$NVMF_PORT", 00:07:57.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.953 "hdgst": ${hdgst:-false}, 00:07:57.953 "ddgst": ${ddgst:-false} 00:07:57.953 }, 00:07:57.953 "method": "bdev_nvme_attach_controller" 00:07:57.953 } 00:07:57.953 EOF 00:07:57.953 )") 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:57.953 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.953 "params": { 00:07:57.953 "name": "Nvme1", 00:07:57.953 "trtype": "tcp", 00:07:57.953 "traddr": "10.0.0.2", 00:07:57.953 "adrfam": "ipv4", 00:07:57.953 "trsvcid": "4420", 00:07:57.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.953 "hdgst": false, 00:07:57.953 "ddgst": false 00:07:57.953 }, 00:07:57.953 "method": "bdev_nvme_attach_controller" 00:07:57.953 }' 00:07:57.953 [2024-11-19 17:25:59.755740] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:07:57.953 [2024-11-19 17:25:59.755788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330268 ] 00:07:57.953 [2024-11-19 17:25:59.831606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.953 [2024-11-19 17:25:59.872972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.953 Running I/O for 10 seconds... 
00:07:59.827 8312.00 IOPS, 64.94 MiB/s [2024-11-19T16:26:03.427Z] 8449.00 IOPS, 66.01 MiB/s [2024-11-19T16:26:04.364Z] 8487.33 IOPS, 66.31 MiB/s [2024-11-19T16:26:05.301Z] 8512.75 IOPS, 66.51 MiB/s [2024-11-19T16:26:06.238Z] 8520.80 IOPS, 66.57 MiB/s [2024-11-19T16:26:07.175Z] 8525.50 IOPS, 66.61 MiB/s [2024-11-19T16:26:08.112Z] 8530.29 IOPS, 66.64 MiB/s [2024-11-19T16:26:09.490Z] 8538.62 IOPS, 66.71 MiB/s [2024-11-19T16:26:10.058Z] 8539.00 IOPS, 66.71 MiB/s [2024-11-19T16:26:10.318Z] 8537.90 IOPS, 66.70 MiB/s 00:08:08.095 Latency(us) 00:08:08.095 [2024-11-19T16:26:10.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.095 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:08.095 Verification LBA range: start 0x0 length 0x1000 00:08:08.095 Nvme1n1 : 10.01 8537.36 66.70 0.00 0.00 14950.36 2464.72 25644.52 00:08:08.095 [2024-11-19T16:26:10.318Z] =================================================================================================================== 00:08:08.095 [2024-11-19T16:26:10.318Z] Total : 8537.36 66.70 0.00 0.00 14950.36 2464.72 25644.52 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3331886 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:08.095 17:26:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:08.095 { 00:08:08.095 "params": { 00:08:08.095 "name": "Nvme$subsystem", 00:08:08.095 "trtype": "$TEST_TRANSPORT", 00:08:08.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.095 "adrfam": "ipv4", 00:08:08.095 "trsvcid": "$NVMF_PORT", 00:08:08.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.095 "hdgst": ${hdgst:-false}, 00:08:08.095 "ddgst": ${ddgst:-false} 00:08:08.095 }, 00:08:08.095 "method": "bdev_nvme_attach_controller" 00:08:08.095 } 00:08:08.095 EOF 00:08:08.095 )") 00:08:08.095 [2024-11-19 17:26:10.225917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:08.095 [2024-11-19 17:26:10.225953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:08.095 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:08.095 "params": { 00:08:08.095 "name": "Nvme1", 00:08:08.095 "trtype": "tcp", 00:08:08.095 "traddr": "10.0.0.2", 00:08:08.095 "adrfam": "ipv4", 00:08:08.095 "trsvcid": "4420", 00:08:08.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.095 "hdgst": false, 00:08:08.095 "ddgst": false 00:08:08.095 }, 00:08:08.095 "method": "bdev_nvme_attach_controller" 00:08:08.095 }' 00:08:08.095 [2024-11-19 17:26:10.237916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 [2024-11-19 17:26:10.237929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.095 [2024-11-19 17:26:10.249946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 [2024-11-19 17:26:10.249961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.095 [2024-11-19 17:26:10.261983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 [2024-11-19 17:26:10.261993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.095 [2024-11-19 17:26:10.265937] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:08:08.095 [2024-11-19 17:26:10.265984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331886 ] 00:08:08.095 [2024-11-19 17:26:10.274009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 [2024-11-19 17:26:10.274020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.095 [2024-11-19 17:26:10.286037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 [2024-11-19 17:26:10.286048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.095 [2024-11-19 17:26:10.298073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 [2024-11-19 17:26:10.298085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.095 [2024-11-19 17:26:10.310104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.095 [2024-11-19 17:26:10.310117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.322134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.322146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.334169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.334180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.340735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.355 [2024-11-19 17:26:10.346198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:08.355 [2024-11-19 17:26:10.346210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.358234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.358250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.370267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.370279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.382299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.382311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.382400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.355 [2024-11-19 17:26:10.394337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.394354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.406368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.406387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.418403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.418426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.430427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.430440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.442459] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.442473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.454489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.454501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.466534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.466553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.478568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.478585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.490602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.490617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.502633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.502648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.514661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.514674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.526689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.526700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.538724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.538735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.550759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.550773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.562792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.562803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.355 [2024-11-19 17:26:10.574826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.355 [2024-11-19 17:26:10.574836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.586854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.586866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.598895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.598910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.610925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.610936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.622962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.622973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.634996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 
[2024-11-19 17:26:10.635008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.647031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.647054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 Running I/O for 5 seconds... 00:08:08.615 [2024-11-19 17:26:10.659060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.659072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.671958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.671978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.686955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.686975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.701475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.701494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.712900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.712919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.727350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.727369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.741710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 
17:26:10.741729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.755464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.755483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.770392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.770411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.785722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.785742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.800445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.800464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.615 [2024-11-19 17:26:10.816182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.615 [2024-11-19 17:26:10.816202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.616 [2024-11-19 17:26:10.830570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.616 [2024-11-19 17:26:10.830589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.841692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.841712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.856123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.856143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.870097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.870117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.884518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.884538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.895466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.895485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.904872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.904891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.919521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.919540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.933224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.933244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.947825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.947843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.963836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.963855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 
[2024-11-19 17:26:10.974721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.974741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:10.989490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:10.989509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:11.000767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:11.000785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:11.010493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:11.010512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:11.025178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:11.025197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:11.039079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:11.039100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:11.053291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:11.053310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:11.067769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:11.067787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.875 [2024-11-19 17:26:11.081959] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.875 [2024-11-19 17:26:11.081978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.876 [2024-11-19 17:26:11.093356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.876 [2024-11-19 17:26:11.093374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.107987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.108006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.121332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.121353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.135524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.135542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.149574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.149593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.163199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.163218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.177099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.177118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.191178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.191197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.205222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.205242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.215437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.215457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.229600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.229618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.243412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.243432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.257770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.257789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.271909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.271927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.285715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.285734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.299767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 
[2024-11-19 17:26:11.299785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.313716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.313735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.327714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.327732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.135 [2024-11-19 17:26:11.341658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.135 [2024-11-19 17:26:11.341676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.136 [2024-11-19 17:26:11.356004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.136 [2024-11-19 17:26:11.356025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.370265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.370285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.383850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.383869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.398200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.398219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.408866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.408885] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.418555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.418574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.433351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.433370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.443940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.443965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.458673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.458692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.470331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.470349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.484392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.484411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.498377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.498396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.508167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.508185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:09.395 [2024-11-19 17:26:11.522877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.522897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.533806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.533825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.542925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.542943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.557521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.557539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.571879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.395 [2024-11-19 17:26:11.571897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.395 [2024-11-19 17:26:11.583013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.396 [2024-11-19 17:26:11.583047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.396 [2024-11-19 17:26:11.597718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.396 [2024-11-19 17:26:11.597736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.396 [2024-11-19 17:26:11.611692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.396 [2024-11-19 17:26:11.611711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.625764] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.625784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.639707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.639726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.654214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.654233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 16368.00 IOPS, 127.88 MiB/s [2024-11-19T16:26:11.878Z] [2024-11-19 17:26:11.669644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.669664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.683735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.683755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.697654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.697675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.711943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.711971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.722478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.722498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.737154] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.737174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.748477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.748497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.762895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.762914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.777121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.777140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.791438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.791457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.805300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.805320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.819430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.819448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.834142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.834161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.849039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.849058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.863731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.863751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.655 [2024-11-19 17:26:11.875011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.655 [2024-11-19 17:26:11.875031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.889481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.889502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.903340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.903361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.917557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.917582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.931529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.931549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.945915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.945934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.956856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 
[2024-11-19 17:26:11.956875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.970825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.970845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.984398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.984417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:11.998440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.915 [2024-11-19 17:26:11.998459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.915 [2024-11-19 17:26:12.009280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.009298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.023739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.023758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.038032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.038052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.051972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.051991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.065915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.065934] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.080201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.080220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.091080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.091098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.106175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.106193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.121741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.121760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.916 [2024-11-19 17:26:12.136286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.916 [2024-11-19 17:26:12.136305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.150387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.150407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.164598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.164617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.178403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.178427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:10.176 [2024-11-19 17:26:12.192128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.192147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.206193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.206212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.220306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.220324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.234138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.234157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.248139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.248158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.262171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.262190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.276581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.276600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.288431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.288450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.176 [2024-11-19 17:26:12.303064] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.176 [2024-11-19 17:26:12.303083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-entry error pair above repeats continuously, roughly every 10-15 ms, from 17:26:12.303 through 17:26:14.629 as the test loops on re-adding NSID 1 to the subsystem; only the fio progress samples interleaved with the repeats are preserved below ...]
00:08:10.696 16501.50 IOPS, 128.92 MiB/s [2024-11-19T16:26:12.919Z]
00:08:11.484 16523.00 IOPS, 129.09 MiB/s [2024-11-19T16:26:13.707Z]
[2024-11-19 17:26:14.629422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.629443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:08:12.525 [2024-11-19 17:26:14.643527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.643546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.525 [2024-11-19 17:26:14.657142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.657161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.525 16562.00 IOPS, 129.39 MiB/s [2024-11-19T16:26:14.748Z] [2024-11-19 17:26:14.671937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.671965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.525 [2024-11-19 17:26:14.682687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.682705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.525 [2024-11-19 17:26:14.696872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.696891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.525 [2024-11-19 17:26:14.710385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.710404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.525 [2024-11-19 17:26:14.724504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.724524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.525 [2024-11-19 17:26:14.738590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.525 [2024-11-19 17:26:14.738614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:12.785 [2024-11-19 17:26:14.753144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.753165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.768359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.768378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.782694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.782713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.796747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.796767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.810796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.810814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.824479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.824498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.838657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.838675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.852642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.852661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.866979] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.866999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.878035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.878053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.892971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.892989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.903704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.903722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.918207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.918225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.932457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.932476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.946616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.946635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.960272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.960290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.974785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.974804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:14.990595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:14.990615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.785 [2024-11-19 17:26:15.005486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.785 [2024-11-19 17:26:15.005509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.045 [2024-11-19 17:26:15.020882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.045 [2024-11-19 17:26:15.020901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.045 [2024-11-19 17:26:15.035062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.045 [2024-11-19 17:26:15.035080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.045 [2024-11-19 17:26:15.048734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.045 [2024-11-19 17:26:15.048752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.062618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.062636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.076478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.076497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.090749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 
[2024-11-19 17:26:15.090767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.101963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.101981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.116891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.116908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.132439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.132457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.146914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.146932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.162468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.162486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.176920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.176938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.191036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.191055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.202254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.202272] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.217137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.217155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.228020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.228038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.242402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.242420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.046 [2024-11-19 17:26:15.256916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.046 [2024-11-19 17:26:15.256934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.272797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.272821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.286680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.286699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.300903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.300922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.314775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.314793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:13.306 [2024-11-19 17:26:15.328869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.328887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.340261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.340279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.354800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.354817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.368888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.368906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.383000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.383018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.397804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.397821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.413170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.413188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.427487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.427505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.441271] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.441289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.455181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.455198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.469398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.469416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.483145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.483164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.497464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.497483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.511693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.511711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.306 [2024-11-19 17:26:15.522797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.306 [2024-11-19 17:26:15.522815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.537584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.537611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.552633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.552652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.566737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.566755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.580581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.580600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.594630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.594649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.608801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.608821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.619345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.619364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.633873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.633892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.648082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 [2024-11-19 17:26:15.648100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 [2024-11-19 17:26:15.662070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.566 
[2024-11-19 17:26:15.662089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.566 16560.40 IOPS, 129.38 MiB/s [2024-11-19T16:26:15.790Z] [2024-11-19 17:26:15.675046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.675065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 00:08:13.567 Latency(us) 00:08:13.567 [2024-11-19T16:26:15.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.567 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:13.567 Nvme1n1 : 5.01 16561.19 129.38 0.00 0.00 7721.14 3732.70 19831.76 00:08:13.567 [2024-11-19T16:26:15.790Z] =================================================================================================================== 00:08:13.567 [2024-11-19T16:26:15.790Z] Total : 16561.19 129.38 0.00 0.00 7721.14 3732.70 19831.76 00:08:13.567 [2024-11-19 17:26:15.684261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.684278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.696288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.696301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.708333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.708352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.720354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.720370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.732387] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.732402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.744415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.744429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.756448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.756462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.768481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.768495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.567 [2024-11-19 17:26:15.780512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.567 [2024-11-19 17:26:15.780525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.827 [2024-11-19 17:26:15.792543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.827 [2024-11-19 17:26:15.792554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.827 [2024-11-19 17:26:15.804580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.827 [2024-11-19 17:26:15.804593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.827 [2024-11-19 17:26:15.816608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.827 [2024-11-19 17:26:15.816620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.827 [2024-11-19 17:26:15.828643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:13.827 [2024-11-19 17:26:15.828654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3331886) - No such process 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3331886 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.827 delay0 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.827 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:13.827 [2024-11-19 17:26:16.018086] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:20.395 [2024-11-19 17:26:22.197405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa38ad0 is same with the state(6) to be set 00:08:20.395 Initializing NVMe Controllers 00:08:20.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:20.395 Initialization complete. Launching workers. 00:08:20.395 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 111 00:08:20.395 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 398, failed to submit 33 00:08:20.395 success 205, unsuccessful 193, failed 0 00:08:20.395 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:20.395 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:20.395 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.395 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:20.395 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.395 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:20.395 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.396 rmmod nvme_tcp 00:08:20.396 rmmod nvme_fabrics 00:08:20.396 rmmod nvme_keyring 
00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3330114 ']' 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3330114 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3330114 ']' 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3330114 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3330114 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3330114' 00:08:20.396 killing process with pid 3330114 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3330114 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3330114 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.396 17:26:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.396 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.938 00:08:22.938 real 0m31.414s 00:08:22.938 user 0m41.988s 00:08:22.938 sys 0m11.071s 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.938 ************************************ 00:08:22.938 END TEST nvmf_zcopy 00:08:22.938 ************************************ 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.938 ************************************ 00:08:22.938 START TEST nvmf_nmic 00:08:22.938 ************************************ 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:22.938 * Looking for test storage... 00:08:22.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.938 17:26:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:22.938 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.939 --rc genhtml_branch_coverage=1 00:08:22.939 --rc genhtml_function_coverage=1 00:08:22.939 --rc genhtml_legend=1 00:08:22.939 --rc geninfo_all_blocks=1 00:08:22.939 --rc geninfo_unexecuted_blocks=1 00:08:22.939 00:08:22.939 ' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.939 --rc genhtml_branch_coverage=1 00:08:22.939 --rc genhtml_function_coverage=1 00:08:22.939 --rc genhtml_legend=1 00:08:22.939 --rc geninfo_all_blocks=1 00:08:22.939 --rc geninfo_unexecuted_blocks=1 00:08:22.939 00:08:22.939 ' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.939 --rc genhtml_branch_coverage=1 00:08:22.939 --rc genhtml_function_coverage=1 00:08:22.939 --rc genhtml_legend=1 00:08:22.939 --rc geninfo_all_blocks=1 00:08:22.939 --rc geninfo_unexecuted_blocks=1 00:08:22.939 00:08:22.939 ' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.939 --rc genhtml_branch_coverage=1 00:08:22.939 --rc genhtml_function_coverage=1 00:08:22.939 --rc genhtml_legend=1 00:08:22.939 --rc geninfo_all_blocks=1 00:08:22.939 --rc geninfo_unexecuted_blocks=1 00:08:22.939 00:08:22.939 ' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.939 17:26:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:22.939 
17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.939 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.521 17:26:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:29.521 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:29.521 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:29.521 Found net devices under 0000:86:00.0: cvl_0_0 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:29.521 Found net devices under 0000:86:00.1: cvl_0_1 00:08:29.521 
17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:29.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:08:29.521 00:08:29.521 --- 10.0.0.2 ping statistics --- 00:08:29.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.521 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:08:29.521 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:08:29.521 00:08:29.521 --- 10.0.0.1 ping statistics --- 00:08:29.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.521 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3337481 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3337481 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3337481 ']' 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.522 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.522 [2024-11-19 17:26:30.891506] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:08:29.522 [2024-11-19 17:26:30.891553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.522 [2024-11-19 17:26:30.971994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.522 [2024-11-19 17:26:31.015756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.522 [2024-11-19 17:26:31.015795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:29.522 [2024-11-19 17:26:31.015804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.522 [2024-11-19 17:26:31.015812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.522 [2024-11-19 17:26:31.015818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.522 [2024-11-19 17:26:31.017442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.522 [2024-11-19 17:26:31.017474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.522 [2024-11-19 17:26:31.017578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.522 [2024-11-19 17:26:31.017579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 [2024-11-19 17:26:31.785298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.782 
17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 Malloc0 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 [2024-11-19 17:26:31.862430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:29.782 test case1: single bdev can't be used in multiple subsystems 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 [2024-11-19 17:26:31.894350] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:29.782 [2024-11-19 
17:26:31.894372] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:29.782 [2024-11-19 17:26:31.894380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.782 request: 00:08:29.782 { 00:08:29.782 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:29.782 "namespace": { 00:08:29.782 "bdev_name": "Malloc0", 00:08:29.782 "no_auto_visible": false 00:08:29.782 }, 00:08:29.782 "method": "nvmf_subsystem_add_ns", 00:08:29.782 "req_id": 1 00:08:29.782 } 00:08:29.782 Got JSON-RPC error response 00:08:29.782 response: 00:08:29.782 { 00:08:29.782 "code": -32602, 00:08:29.782 "message": "Invalid parameters" 00:08:29.782 } 00:08:29.782 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:29.783 Adding namespace failed - expected result. 
00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:29.783 test case2: host connect to nvmf target in multiple paths 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.783 [2024-11-19 17:26:31.906502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.783 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:31.230 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:32.201 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.201 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:32.201 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.201 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:32.201 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:34.112 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:34.112 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:34.112 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.112 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:34.112 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.112 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:34.112 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:34.383 [global] 00:08:34.383 thread=1 00:08:34.383 invalidate=1 00:08:34.383 rw=write 00:08:34.383 time_based=1 00:08:34.383 runtime=1 00:08:34.384 ioengine=libaio 00:08:34.384 direct=1 00:08:34.384 bs=4096 00:08:34.384 iodepth=1 00:08:34.384 norandommap=0 00:08:34.384 numjobs=1 00:08:34.384 00:08:34.384 verify_dump=1 00:08:34.384 verify_backlog=512 00:08:34.384 verify_state_save=0 00:08:34.384 do_verify=1 00:08:34.384 verify=crc32c-intel 00:08:34.384 [job0] 00:08:34.384 filename=/dev/nvme0n1 00:08:34.384 Could not set queue depth (nvme0n1) 00:08:34.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:34.643 fio-3.35 00:08:34.643 Starting 1 thread 00:08:35.582 00:08:35.582 job0: (groupid=0, jobs=1): err= 0: pid=3338573: Tue Nov 19 17:26:37 2024 00:08:35.582 read: IOPS=514, BW=2058KiB/s (2108kB/s)(2116KiB/1028msec) 00:08:35.582 slat (nsec): min=6407, max=23418, avg=7837.98, stdev=2857.16 00:08:35.582 clat (usec): min=156, max=42141, avg=1659.19, stdev=7675.08 00:08:35.582 lat (usec): min=164, max=42151, 
avg=1667.03, stdev=7677.63 00:08:35.582 clat percentiles (usec): 00:08:35.582 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:08:35.582 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:08:35.582 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 223], 00:08:35.582 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:35.582 | 99.99th=[42206] 00:08:35.582 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:08:35.582 slat (nsec): min=9048, max=40216, avg=9970.79, stdev=1377.45 00:08:35.582 clat (usec): min=108, max=267, avg=128.85, stdev=10.50 00:08:35.582 lat (usec): min=118, max=308, avg=138.82, stdev=10.98 00:08:35.582 clat percentiles (usec): 00:08:35.582 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 123], 00:08:35.582 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 130], 00:08:35.582 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 137], 95.00th=[ 143], 00:08:35.582 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 247], 99.95th=[ 269], 00:08:35.582 | 99.99th=[ 269] 00:08:35.582 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:08:35.582 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:35.582 lat (usec) : 250=98.65%, 500=0.13% 00:08:35.582 lat (msec) : 50=1.22% 00:08:35.582 cpu : usr=0.58%, sys=1.46%, ctx=1553, majf=0, minf=1 00:08:35.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:35.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.582 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:35.582 00:08:35.582 Run status group 0 (all jobs): 00:08:35.582 READ: bw=2058KiB/s (2108kB/s), 2058KiB/s-2058KiB/s (2108kB/s-2108kB/s), io=2116KiB (2167kB), 
run=1028-1028msec 00:08:35.582 WRITE: bw=3984KiB/s (4080kB/s), 3984KiB/s-3984KiB/s (4080kB/s-4080kB/s), io=4096KiB (4194kB), run=1028-1028msec 00:08:35.582 00:08:35.582 Disk stats (read/write): 00:08:35.582 nvme0n1: ios=575/1024, merge=0/0, ticks=727/125, in_queue=852, util=91.28% 00:08:35.582 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:35.842 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.842 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:08:35.842 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.842 rmmod nvme_tcp 00:08:35.842 rmmod nvme_fabrics 00:08:36.102 rmmod nvme_keyring 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3337481 ']' 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3337481 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3337481 ']' 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3337481 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3337481 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3337481' 00:08:36.102 killing process with pid 3337481 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3337481 00:08:36.102 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3337481 00:08:36.362 17:26:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.362 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.273 00:08:38.273 real 0m15.787s 00:08:38.273 user 0m36.684s 00:08:38.273 sys 0m5.309s 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.273 ************************************ 00:08:38.273 END TEST nvmf_nmic 00:08:38.273 ************************************ 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.273 ************************************ 00:08:38.273 START TEST nvmf_fio_target 00:08:38.273 ************************************ 00:08:38.273 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:38.534 * Looking for test storage... 00:08:38.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:38.534 17:26:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.534 --rc genhtml_branch_coverage=1 00:08:38.534 --rc genhtml_function_coverage=1 00:08:38.534 --rc genhtml_legend=1 00:08:38.534 --rc geninfo_all_blocks=1 00:08:38.534 --rc geninfo_unexecuted_blocks=1 00:08:38.534 00:08:38.534 ' 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.534 --rc genhtml_branch_coverage=1 00:08:38.534 --rc genhtml_function_coverage=1 00:08:38.534 --rc genhtml_legend=1 00:08:38.534 --rc geninfo_all_blocks=1 00:08:38.534 --rc geninfo_unexecuted_blocks=1 00:08:38.534 00:08:38.534 ' 00:08:38.534 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.534 --rc genhtml_branch_coverage=1 00:08:38.535 --rc genhtml_function_coverage=1 00:08:38.535 --rc genhtml_legend=1 00:08:38.535 --rc geninfo_all_blocks=1 00:08:38.535 --rc geninfo_unexecuted_blocks=1 00:08:38.535 00:08:38.535 ' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:38.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.535 --rc genhtml_branch_coverage=1 00:08:38.535 --rc genhtml_function_coverage=1 00:08:38.535 --rc genhtml_legend=1 00:08:38.535 --rc geninfo_all_blocks=1 00:08:38.535 --rc geninfo_unexecuted_blocks=1 00:08:38.535 00:08:38.535 ' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.535 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.117 17:26:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:45.117 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:45.117 17:26:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:45.117 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:45.117 Found net devices under 0000:86:00.0: cvl_0_0 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:45.117 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.117 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:08:45.118 00:08:45.118 --- 10.0.0.2 ping statistics --- 00:08:45.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.118 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:08:45.118 00:08:45.118 --- 10.0.0.1 ping statistics --- 00:08:45.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.118 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
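The nvmf_tcp_init bring-up traced above moves one NIC into a network namespace so target and initiator traffic cross the physical link. A minimal standalone sketch of those steps, assuming the same interface names (cvl_0_0/cvl_0_1), root privileges, and two ports on one physical link — shown as a sketch, not a runnable test:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from the trace above.
# Assumes two NICs on the same physical link and root; not standalone-runnable.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0     # moved into the namespace, gets the target IP
INIT_IF=cvl_0_1       # stays in the default namespace, gets the initiator IP

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INIT_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks mirroring the pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

This mirrors the `ip netns`/`ip addr`/`iptables` records in the trace; it is a privileged network-configuration fragment, so it is not exercised here.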
00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3342336 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3342336 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3342336 ']' 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.118 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.118 [2024-11-19 17:26:46.784361] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:08:45.118 [2024-11-19 17:26:46.784405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.118 [2024-11-19 17:26:46.862832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.118 [2024-11-19 17:26:46.906332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.118 [2024-11-19 17:26:46.906368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.118 [2024-11-19 17:26:46.906375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.118 [2024-11-19 17:26:46.906382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.118 [2024-11-19 17:26:46.906387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
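The EAL parameter line above shows the target launched with core mask 0xF and the app reports four cores available; a mask decodes to the positions of its set bits. A small helper (the function name is illustrative, not part of SPDK) makes the mapping explicit:

```shell
# Decode a DPDK/SPDK core mask (-c/-m, e.g. 0xF) into the list of core IDs,
# i.e. the positions of the set bits.
cores_from_mask() {
  local mask=$(( $1 )) bit
  local -a out=()
  for (( bit = 0; bit < 64; bit++ )); do
    (( (mask >> bit) & 1 )) && out+=("$bit")
  done
  echo "${out[@]}"
}

cores_from_mask 0xF   # -> 0 1 2 3
```

For 0xF this yields cores 0 through 3, matching the four reactors started in the trace.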
00:08:45.118 [2024-11-19 17:26:46.907858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.118 [2024-11-19 17:26:46.907884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.118 [2024-11-19 17:26:46.907991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.118 [2024-11-19 17:26:46.907991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:45.118 [2024-11-19 17:26:47.226337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.118 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.378 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:45.378 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.637 17:26:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:45.637 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.897 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:45.897 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.156 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:46.156 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:46.156 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.416 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:46.416 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.676 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:46.676 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.935 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:46.935 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:47.194 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.194 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:47.194 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.453 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:47.453 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:47.713 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.972 [2024-11-19 17:26:49.947328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.972 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:47.972 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:48.231 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
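Condensed, the target setup that fio.sh drives above is a short rpc.py sequence. The sketch below abbreviates the paths and omits the `--hostnqn`/`--hostid` flags from the log's `nvme connect`; it assumes a running nvmf_tgt and is not runnable standalone:

```shell
#!/usr/bin/env bash
# Condensed sketch of the fio.sh target setup traced above (paths shortened).
# Assumes a running nvmf_tgt reachable via scripts/rpc.py; not standalone.
rpc=scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192

# Two plain malloc bdevs, a RAID0 over two more, and a concat over three more.
malloc0=$($rpc bdev_malloc_create 64 512)
malloc1=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b "$($rpc bdev_malloc_create 64 512) $($rpc bdev_malloc_create 64 512)"
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b "$($rpc bdev_malloc_create 64 512) $($rpc bdev_malloc_create 64 512) $($rpc bdev_malloc_create 64 512)"

$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
for bdev in "$malloc0" "$malloc1" raid0 concat0; do
  $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"
done
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the four namespaces then appear as /dev/nvme0n1..nvme0n4,
# which waitforserial detects by matching the SPDKISFASTANDAWESOME serial.
nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
```

Every command here appears verbatim (with full paths) in the trace; only the ordering has been grouped for readability. As a target-side configuration fragment it is not exercised here.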
00:08:49.610 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:49.610 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:49.610 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.610 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:49.610 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:49.610 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:51.517 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:51.517 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:51.517 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.517 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:51.517 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.517 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:51.517 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:51.517 [global] 00:08:51.517 thread=1 00:08:51.517 invalidate=1 00:08:51.517 rw=write 00:08:51.517 time_based=1 00:08:51.517 runtime=1 00:08:51.517 ioengine=libaio 00:08:51.517 direct=1 00:08:51.517 bs=4096 00:08:51.517 iodepth=1 00:08:51.517 norandommap=0 00:08:51.517 numjobs=1 00:08:51.517 00:08:51.517 
verify_dump=1 00:08:51.517 verify_backlog=512 00:08:51.517 verify_state_save=0 00:08:51.517 do_verify=1 00:08:51.517 verify=crc32c-intel 00:08:51.517 [job0] 00:08:51.517 filename=/dev/nvme0n1 00:08:51.517 [job1] 00:08:51.517 filename=/dev/nvme0n2 00:08:51.517 [job2] 00:08:51.517 filename=/dev/nvme0n3 00:08:51.517 [job3] 00:08:51.517 filename=/dev/nvme0n4 00:08:51.517 Could not set queue depth (nvme0n1) 00:08:51.517 Could not set queue depth (nvme0n2) 00:08:51.517 Could not set queue depth (nvme0n3) 00:08:51.517 Could not set queue depth (nvme0n4) 00:08:51.777 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.777 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.777 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.777 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.777 fio-3.35 00:08:51.777 Starting 4 threads 00:08:53.202 00:08:53.202 job0: (groupid=0, jobs=1): err= 0: pid=3343759: Tue Nov 19 17:26:55 2024 00:08:53.202 read: IOPS=2368, BW=9475KiB/s (9702kB/s)(9484KiB/1001msec) 00:08:53.202 slat (nsec): min=7064, max=36712, avg=8852.16, stdev=1680.37 00:08:53.203 clat (usec): min=164, max=377, avg=215.96, stdev=23.58 00:08:53.203 lat (usec): min=173, max=386, avg=224.81, stdev=23.68 00:08:53.203 clat percentiles (usec): 00:08:53.203 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:08:53.203 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:08:53.203 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 258], 00:08:53.203 | 99.00th=[ 277], 99.50th=[ 330], 99.90th=[ 363], 99.95th=[ 367], 00:08:53.203 | 99.99th=[ 379] 00:08:53.203 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:53.203 slat (nsec): min=10622, max=47035, avg=12823.87, 
stdev=2021.36 00:08:53.203 clat (usec): min=113, max=1267, avg=163.62, stdev=35.39 00:08:53.203 lat (usec): min=127, max=1281, avg=176.45, stdev=35.57 00:08:53.203 clat percentiles (usec): 00:08:53.203 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 145], 00:08:53.203 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:08:53.203 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 237], 00:08:53.203 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 281], 99.95th=[ 289], 00:08:53.203 | 99.99th=[ 1270] 00:08:53.203 bw ( KiB/s): min=12263, max=12263, per=48.63%, avg=12263.00, stdev= 0.00, samples=1 00:08:53.203 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:08:53.203 lat (usec) : 250=94.20%, 500=5.78% 00:08:53.203 lat (msec) : 2=0.02% 00:08:53.203 cpu : usr=3.30%, sys=9.20%, ctx=4933, majf=0, minf=1 00:08:53.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 issued rwts: total=2371,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.203 job1: (groupid=0, jobs=1): err= 0: pid=3343776: Tue Nov 19 17:26:55 2024 00:08:53.203 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:53.203 slat (nsec): min=6258, max=25494, avg=7285.85, stdev=850.12 00:08:53.203 clat (usec): min=155, max=403, avg=205.60, stdev=24.89 00:08:53.203 lat (usec): min=163, max=409, avg=212.89, stdev=24.91 00:08:53.203 clat percentiles (usec): 00:08:53.203 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:08:53.203 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:08:53.203 | 70.00th=[ 208], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 255], 00:08:53.203 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 371], 00:08:53.203 | 
99.99th=[ 404] 00:08:53.203 write: IOPS=2963, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:08:53.203 slat (nsec): min=9137, max=39210, avg=10466.22, stdev=1469.73 00:08:53.203 clat (usec): min=107, max=227, avg=138.84, stdev=15.02 00:08:53.203 lat (usec): min=116, max=266, avg=149.31, stdev=15.58 00:08:53.203 clat percentiles (usec): 00:08:53.203 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:08:53.203 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:08:53.203 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:08:53.203 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 225], 99.95th=[ 229], 00:08:53.203 | 99.99th=[ 229] 00:08:53.203 bw ( KiB/s): min=12288, max=12288, per=48.73%, avg=12288.00, stdev= 0.00, samples=1 00:08:53.203 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:53.203 lat (usec) : 250=96.24%, 500=3.76% 00:08:53.203 cpu : usr=3.10%, sys=4.80%, ctx=5527, majf=0, minf=2 00:08:53.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 issued rwts: total=2560,2966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.203 job2: (groupid=0, jobs=1): err= 0: pid=3343794: Tue Nov 19 17:26:55 2024 00:08:53.203 read: IOPS=30, BW=122KiB/s (125kB/s)(124KiB/1016msec) 00:08:53.203 slat (nsec): min=8957, max=27853, avg=19763.32, stdev=6078.36 00:08:53.203 clat (usec): min=248, max=41353, avg=29170.94, stdev=18726.58 00:08:53.203 lat (usec): min=259, max=41364, avg=29190.70, stdev=18728.27 00:08:53.203 clat percentiles (usec): 00:08:53.203 | 1.00th=[ 249], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 433], 00:08:53.203 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:08:53.203 | 70.00th=[41157], 
80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:53.203 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:53.203 | 99.99th=[41157] 00:08:53.203 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:08:53.203 slat (nsec): min=11742, max=36672, avg=15426.36, stdev=2454.57 00:08:53.203 clat (usec): min=144, max=295, avg=196.42, stdev=19.81 00:08:53.203 lat (usec): min=156, max=331, avg=211.85, stdev=20.65 00:08:53.203 clat percentiles (usec): 00:08:53.203 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:08:53.203 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:08:53.203 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 231], 00:08:53.203 | 99.00th=[ 249], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 297], 00:08:53.203 | 99.99th=[ 297] 00:08:53.203 bw ( KiB/s): min= 4087, max= 4087, per=16.21%, avg=4087.00, stdev= 0.00, samples=1 00:08:53.203 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:08:53.203 lat (usec) : 250=93.74%, 500=2.21% 00:08:53.203 lat (msec) : 50=4.05% 00:08:53.203 cpu : usr=0.39%, sys=1.18%, ctx=547, majf=0, minf=1 00:08:53.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.203 job3: (groupid=0, jobs=1): err= 0: pid=3343799: Tue Nov 19 17:26:55 2024 00:08:53.203 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:08:53.203 slat (nsec): min=9718, max=23816, avg=22283.09, stdev=2760.61 00:08:53.203 clat (usec): min=40901, max=41954, avg=41064.73, stdev=270.94 00:08:53.203 lat (usec): min=40924, max=41977, avg=41087.01, stdev=270.95 00:08:53.203 clat percentiles (usec): 
00:08:53.203 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:53.203 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:53.203 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:08:53.203 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:53.203 | 99.99th=[42206] 00:08:53.203 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:08:53.203 slat (nsec): min=9343, max=41291, avg=10389.80, stdev=1732.77 00:08:53.203 clat (usec): min=137, max=252, avg=169.83, stdev=14.24 00:08:53.203 lat (usec): min=147, max=293, avg=180.22, stdev=14.79 00:08:53.203 clat percentiles (usec): 00:08:53.203 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:08:53.203 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:08:53.203 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:08:53.203 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 253], 99.95th=[ 253], 00:08:53.203 | 99.99th=[ 253] 00:08:53.203 bw ( KiB/s): min= 4087, max= 4087, per=16.21%, avg=4087.00, stdev= 0.00, samples=1 00:08:53.203 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:08:53.203 lat (usec) : 250=95.51%, 500=0.19% 00:08:53.203 lat (msec) : 50=4.30% 00:08:53.203 cpu : usr=0.39%, sys=0.29%, ctx=536, majf=0, minf=2 00:08:53.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.203 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.203 00:08:53.203 Run status group 0 (all jobs): 00:08:53.203 READ: bw=18.7MiB/s (19.7MB/s), 88.5KiB/s-9.99MiB/s (90.7kB/s-10.5MB/s), io=19.5MiB (20.4MB), run=1001-1039msec 00:08:53.203 WRITE: bw=24.6MiB/s (25.8MB/s), 
1971KiB/s-11.6MiB/s (2018kB/s-12.1MB/s), io=25.6MiB (26.8MB), run=1001-1039msec 00:08:53.203 00:08:53.203 Disk stats (read/write): 00:08:53.203 nvme0n1: ios=2074/2110, merge=0/0, ticks=1410/326, in_queue=1736, util=98.00% 00:08:53.203 nvme0n2: ios=2221/2560, merge=0/0, ticks=432/353, in_queue=785, util=86.79% 00:08:53.203 nvme0n3: ios=52/512, merge=0/0, ticks=1724/96, in_queue=1820, util=98.33% 00:08:53.203 nvme0n4: ios=73/512, merge=0/0, ticks=885/84, in_queue=969, util=99.16% 00:08:53.203 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:53.203 [global] 00:08:53.203 thread=1 00:08:53.203 invalidate=1 00:08:53.203 rw=randwrite 00:08:53.203 time_based=1 00:08:53.203 runtime=1 00:08:53.203 ioengine=libaio 00:08:53.203 direct=1 00:08:53.203 bs=4096 00:08:53.203 iodepth=1 00:08:53.203 norandommap=0 00:08:53.203 numjobs=1 00:08:53.203 00:08:53.203 verify_dump=1 00:08:53.203 verify_backlog=512 00:08:53.203 verify_state_save=0 00:08:53.203 do_verify=1 00:08:53.203 verify=crc32c-intel 00:08:53.203 [job0] 00:08:53.203 filename=/dev/nvme0n1 00:08:53.203 [job1] 00:08:53.203 filename=/dev/nvme0n2 00:08:53.203 [job2] 00:08:53.203 filename=/dev/nvme0n3 00:08:53.203 [job3] 00:08:53.203 filename=/dev/nvme0n4 00:08:53.203 Could not set queue depth (nvme0n1) 00:08:53.203 Could not set queue depth (nvme0n2) 00:08:53.203 Could not set queue depth (nvme0n3) 00:08:53.203 Could not set queue depth (nvme0n4) 00:08:53.466 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:53.466 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:53.466 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:53.466 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:08:53.466 fio-3.35 00:08:53.466 Starting 4 threads 00:08:54.855 00:08:54.855 job0: (groupid=0, jobs=1): err= 0: pid=3344254: Tue Nov 19 17:26:56 2024 00:08:54.855 read: IOPS=1018, BW=4074KiB/s (4172kB/s)(4188KiB/1028msec) 00:08:54.855 slat (nsec): min=7025, max=24571, avg=8519.65, stdev=1788.19 00:08:54.855 clat (usec): min=190, max=41452, avg=711.82, stdev=4151.33 00:08:54.855 lat (usec): min=199, max=41461, avg=720.34, stdev=4151.95 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 239], 00:08:54.855 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:08:54.855 | 70.00th=[ 269], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 416], 00:08:54.855 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:08:54.855 | 99.99th=[41681] 00:08:54.855 write: IOPS=1494, BW=5977KiB/s (6120kB/s)(6144KiB/1028msec); 0 zone resets 00:08:54.855 slat (nsec): min=10295, max=37089, avg=11667.35, stdev=1930.70 00:08:54.855 clat (usec): min=115, max=3340, avg=161.16, stdev=86.02 00:08:54.855 lat (usec): min=126, max=3354, avg=172.82, stdev=86.17 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:08:54.855 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:08:54.855 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:08:54.855 | 99.00th=[ 202], 99.50th=[ 258], 99.90th=[ 906], 99.95th=[ 3326], 00:08:54.855 | 99.99th=[ 3326] 00:08:54.855 bw ( KiB/s): min= 3344, max= 8944, per=34.27%, avg=6144.00, stdev=3959.80, samples=2 00:08:54.855 iops : min= 836, max= 2236, avg=1536.00, stdev=989.95, samples=2 00:08:54.855 lat (usec) : 250=77.78%, 500=21.64%, 750=0.04%, 1000=0.04% 00:08:54.855 lat (msec) : 2=0.04%, 4=0.04%, 50=0.43% 00:08:54.855 cpu : usr=2.14%, sys=4.09%, ctx=2586, majf=0, minf=1 00:08:54.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:08:54.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 issued rwts: total=1047,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:54.855 job1: (groupid=0, jobs=1): err= 0: pid=3344271: Tue Nov 19 17:26:56 2024 00:08:54.855 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:08:54.855 slat (nsec): min=10420, max=22242, avg=19639.45, stdev=4028.62 00:08:54.855 clat (usec): min=40876, max=41212, avg=40989.17, stdev=78.69 00:08:54.855 lat (usec): min=40898, max=41223, avg=41008.81, stdev=77.19 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:54.855 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:54.855 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:54.855 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:54.855 | 99.99th=[41157] 00:08:54.855 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:08:54.855 slat (nsec): min=11409, max=39682, avg=12576.42, stdev=1923.57 00:08:54.855 clat (usec): min=153, max=336, avg=189.44, stdev=16.53 00:08:54.855 lat (usec): min=166, max=351, avg=202.01, stdev=16.89 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:08:54.855 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:08:54.855 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 212], 00:08:54.855 | 99.00th=[ 237], 99.50th=[ 258], 99.90th=[ 338], 99.95th=[ 338], 00:08:54.855 | 99.99th=[ 338] 00:08:54.855 bw ( KiB/s): min= 4096, max= 4096, per=22.84%, avg=4096.00, stdev= 0.00, samples=1 00:08:54.855 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:54.855 lat (usec) : 250=95.32%, 
500=0.56% 00:08:54.855 lat (msec) : 50=4.12% 00:08:54.855 cpu : usr=0.50%, sys=0.99%, ctx=534, majf=0, minf=2 00:08:54.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:54.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:54.855 job2: (groupid=0, jobs=1): err= 0: pid=3344278: Tue Nov 19 17:26:56 2024 00:08:54.855 read: IOPS=627, BW=2511KiB/s (2572kB/s)(2564KiB/1021msec) 00:08:54.855 slat (nsec): min=6576, max=51926, avg=8529.13, stdev=3217.27 00:08:54.855 clat (usec): min=182, max=41198, avg=1271.12, stdev=6345.47 00:08:54.855 lat (usec): min=190, max=41209, avg=1279.65, stdev=6347.18 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 231], 00:08:54.855 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:08:54.855 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 269], 00:08:54.855 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:54.855 | 99.99th=[41157] 00:08:54.855 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:08:54.855 slat (nsec): min=9466, max=37979, avg=11344.55, stdev=2125.42 00:08:54.855 clat (usec): min=136, max=330, avg=179.68, stdev=19.68 00:08:54.855 lat (usec): min=146, max=359, avg=191.02, stdev=19.84 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:08:54.855 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:08:54.855 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:08:54.855 | 99.00th=[ 227], 99.50th=[ 231], 99.90th=[ 314], 99.95th=[ 330], 00:08:54.855 | 99.99th=[ 330] 00:08:54.855 bw ( KiB/s): min= 8192, max= 
8192, per=45.69%, avg=8192.00, stdev= 0.00, samples=1 00:08:54.855 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:54.855 lat (usec) : 250=87.33%, 500=11.59% 00:08:54.855 lat (msec) : 10=0.12%, 50=0.96% 00:08:54.855 cpu : usr=0.69%, sys=2.25%, ctx=1667, majf=0, minf=1 00:08:54.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:54.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 issued rwts: total=641,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:54.855 job3: (groupid=0, jobs=1): err= 0: pid=3344279: Tue Nov 19 17:26:56 2024 00:08:54.855 read: IOPS=1027, BW=4111KiB/s (4210kB/s)(4140KiB/1007msec) 00:08:54.855 slat (nsec): min=7267, max=23578, avg=8686.29, stdev=1835.93 00:08:54.855 clat (usec): min=195, max=41064, avg=679.07, stdev=4178.70 00:08:54.855 lat (usec): min=203, max=41087, avg=687.76, stdev=4180.09 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:08:54.855 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:08:54.855 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:08:54.855 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:54.855 | 99.99th=[41157] 00:08:54.855 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:08:54.855 slat (nsec): min=10403, max=37381, avg=11895.92, stdev=1782.93 00:08:54.855 clat (usec): min=131, max=346, avg=174.69, stdev=19.62 00:08:54.855 lat (usec): min=142, max=384, avg=186.58, stdev=20.06 00:08:54.855 clat percentiles (usec): 00:08:54.855 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 161], 00:08:54.855 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:08:54.855 | 70.00th=[ 182], 80.00th=[ 188], 
90.00th=[ 196], 95.00th=[ 208], 00:08:54.855 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 347], 99.95th=[ 347], 00:08:54.855 | 99.99th=[ 347] 00:08:54.855 bw ( KiB/s): min= 2456, max= 9832, per=34.27%, avg=6144.00, stdev=5215.62, samples=2 00:08:54.855 iops : min= 614, max= 2458, avg=1536.00, stdev=1303.90, samples=2 00:08:54.855 lat (usec) : 250=85.03%, 500=14.55% 00:08:54.855 lat (msec) : 50=0.43% 00:08:54.855 cpu : usr=2.29%, sys=4.08%, ctx=2572, majf=0, minf=1 00:08:54.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:54.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.855 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:54.855 00:08:54.855 Run status group 0 (all jobs): 00:08:54.855 READ: bw=10.4MiB/s (10.9MB/s), 87.3KiB/s-4111KiB/s (89.4kB/s-4210kB/s), io=10.7MiB (11.2MB), run=1007-1028msec 00:08:54.855 WRITE: bw=17.5MiB/s (18.4MB/s), 2032KiB/s-6101KiB/s (2081kB/s-6248kB/s), io=18.0MiB (18.9MB), run=1007-1028msec 00:08:54.855 00:08:54.855 Disk stats (read/write): 00:08:54.856 nvme0n1: ios=1069/1536, merge=0/0, ticks=1515/233, in_queue=1748, util=97.90% 00:08:54.856 nvme0n2: ios=18/512, merge=0/0, ticks=739/91, in_queue=830, util=86.79% 00:08:54.856 nvme0n3: ios=664/1024, merge=0/0, ticks=1589/181, in_queue=1770, util=98.02% 00:08:54.856 nvme0n4: ios=1065/1536, merge=0/0, ticks=1198/244, in_queue=1442, util=99.68% 00:08:54.856 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:54.856 [global] 00:08:54.856 thread=1 00:08:54.856 invalidate=1 00:08:54.856 rw=write 00:08:54.856 time_based=1 00:08:54.856 runtime=1 00:08:54.856 ioengine=libaio 00:08:54.856 direct=1 00:08:54.856 bs=4096 
00:08:54.856 iodepth=128 00:08:54.856 norandommap=0 00:08:54.856 numjobs=1 00:08:54.856 00:08:54.856 verify_dump=1 00:08:54.856 verify_backlog=512 00:08:54.856 verify_state_save=0 00:08:54.856 do_verify=1 00:08:54.856 verify=crc32c-intel 00:08:54.856 [job0] 00:08:54.856 filename=/dev/nvme0n1 00:08:54.856 [job1] 00:08:54.856 filename=/dev/nvme0n2 00:08:54.856 [job2] 00:08:54.856 filename=/dev/nvme0n3 00:08:54.856 [job3] 00:08:54.856 filename=/dev/nvme0n4 00:08:54.856 Could not set queue depth (nvme0n1) 00:08:54.856 Could not set queue depth (nvme0n2) 00:08:54.856 Could not set queue depth (nvme0n3) 00:08:54.856 Could not set queue depth (nvme0n4) 00:08:54.856 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.856 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.856 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.856 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.856 fio-3.35 00:08:54.856 Starting 4 threads 00:08:56.258 00:08:56.258 job0: (groupid=0, jobs=1): err= 0: pid=3344664: Tue Nov 19 17:26:58 2024 00:08:56.258 read: IOPS=1595, BW=6381KiB/s (6534kB/s)(6400KiB/1003msec) 00:08:56.258 slat (nsec): min=1720, max=29807k, avg=196657.05, stdev=1528979.27 00:08:56.258 clat (usec): min=1463, max=85570, avg=21776.52, stdev=16027.87 00:08:56.258 lat (usec): min=3086, max=85599, avg=21973.18, stdev=16158.20 00:08:56.258 clat percentiles (usec): 00:08:56.258 | 1.00th=[ 7373], 5.00th=[10159], 10.00th=[10290], 20.00th=[12387], 00:08:56.258 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14353], 60.00th=[15926], 00:08:56.258 | 70.00th=[19530], 80.00th=[26870], 90.00th=[48497], 95.00th=[59507], 00:08:56.258 | 99.00th=[73925], 99.50th=[73925], 99.90th=[77071], 99.95th=[85459], 00:08:56.258 | 99.99th=[85459] 00:08:56.258 
write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:08:56.258 slat (usec): min=2, max=16635, avg=327.79, stdev=1442.76 00:08:56.258 clat (msec): min=5, max=140, avg=44.85, stdev=37.37 00:08:56.258 lat (msec): min=5, max=140, avg=45.18, stdev=37.64 00:08:56.258 clat percentiles (msec): 00:08:56.258 | 1.00th=[ 11], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 15], 00:08:56.258 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 31], 60.00th=[ 39], 00:08:56.258 | 70.00th=[ 55], 80.00th=[ 80], 90.00th=[ 111], 95.00th=[ 124], 00:08:56.258 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:08:56.258 | 99.99th=[ 142] 00:08:56.258 bw ( KiB/s): min= 6712, max= 9160, per=13.22%, avg=7936.00, stdev=1731.00, samples=2 00:08:56.258 iops : min= 1678, max= 2290, avg=1984.00, stdev=432.75, samples=2 00:08:56.258 lat (msec) : 2=0.03%, 4=0.03%, 10=2.17%, 20=51.34%, 50=24.53% 00:08:56.258 lat (msec) : 100=14.34%, 250=7.57% 00:08:56.258 cpu : usr=2.40%, sys=2.10%, ctx=239, majf=0, minf=1 00:08:56.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:08:56.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.258 issued rwts: total=1600,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.258 job1: (groupid=0, jobs=1): err= 0: pid=3344669: Tue Nov 19 17:26:58 2024 00:08:56.258 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:08:56.258 slat (nsec): min=1177, max=23582k, avg=126419.38, stdev=980460.19 00:08:56.258 clat (usec): min=2960, max=59331, avg=15398.23, stdev=9478.83 00:08:56.258 lat (usec): min=2966, max=59334, avg=15524.65, stdev=9557.31 00:08:56.258 clat percentiles (usec): 00:08:56.258 | 1.00th=[ 4555], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10028], 00:08:56.258 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11994], 
60.00th=[13960], 00:08:56.258 | 70.00th=[16712], 80.00th=[18744], 90.00th=[21627], 95.00th=[27395], 00:08:56.258 | 99.00th=[55837], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:08:56.258 | 99.99th=[59507] 00:08:56.258 write: IOPS=3858, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1007msec); 0 zone resets 00:08:56.259 slat (usec): min=2, max=15494, avg=126.75, stdev=772.48 00:08:56.259 clat (usec): min=527, max=85466, avg=18684.68, stdev=14950.58 00:08:56.259 lat (usec): min=535, max=85469, avg=18811.43, stdev=15038.96 00:08:56.259 clat percentiles (usec): 00:08:56.259 | 1.00th=[ 3687], 5.00th=[ 5604], 10.00th=[ 6521], 20.00th=[ 8356], 00:08:56.259 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[11338], 60.00th=[15795], 00:08:56.259 | 70.00th=[20055], 80.00th=[30016], 90.00th=[43779], 95.00th=[53216], 00:08:56.259 | 99.00th=[60031], 99.50th=[61604], 99.90th=[78119], 99.95th=[78119], 00:08:56.259 | 99.99th=[85459] 00:08:56.259 bw ( KiB/s): min=13688, max=16384, per=25.06%, avg=15036.00, stdev=1906.36, samples=2 00:08:56.259 iops : min= 3422, max= 4096, avg=3759.00, stdev=476.59, samples=2 00:08:56.259 lat (usec) : 750=0.04%, 1000=0.13% 00:08:56.259 lat (msec) : 2=0.09%, 4=0.66%, 10=29.93%, 20=45.27%, 50=18.27% 00:08:56.259 lat (msec) : 100=5.60% 00:08:56.259 cpu : usr=2.98%, sys=3.78%, ctx=307, majf=0, minf=1 00:08:56.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:56.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.259 issued rwts: total=3584,3886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.259 job2: (groupid=0, jobs=1): err= 0: pid=3344672: Tue Nov 19 17:26:58 2024 00:08:56.259 read: IOPS=5370, BW=21.0MiB/s (22.0MB/s)(21.9MiB/1044msec) 00:08:56.259 slat (nsec): min=1114, max=14567k, avg=96762.18, stdev=706901.45 00:08:56.259 clat (usec): 
min=2810, max=59995, avg=13173.11, stdev=7807.67 00:08:56.259 lat (usec): min=2816, max=61621, avg=13269.87, stdev=7831.88 00:08:56.259 clat percentiles (usec): 00:08:56.259 | 1.00th=[ 4490], 5.00th=[ 6783], 10.00th=[ 8848], 20.00th=[ 9241], 00:08:56.259 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11600], 00:08:56.259 | 70.00th=[13173], 80.00th=[16712], 90.00th=[20579], 95.00th=[22938], 00:08:56.259 | 99.00th=[53216], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:08:56.259 | 99.99th=[60031] 00:08:56.259 write: IOPS=5394, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1044msec); 0 zone resets 00:08:56.259 slat (nsec): min=1905, max=11752k, avg=70555.93, stdev=421320.01 00:08:56.259 clat (usec): min=232, max=36598, avg=10394.06, stdev=4479.06 00:08:56.259 lat (usec): min=264, max=36602, avg=10464.61, stdev=4498.38 00:08:56.259 clat percentiles (usec): 00:08:56.259 | 1.00th=[ 1549], 5.00th=[ 4228], 10.00th=[ 5997], 20.00th=[ 8094], 00:08:56.259 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10159], 00:08:56.259 | 70.00th=[11731], 80.00th=[12387], 90.00th=[12780], 95.00th=[17695], 00:08:56.259 | 99.00th=[30016], 99.50th=[31589], 99.90th=[33817], 99.95th=[35390], 00:08:56.259 | 99.99th=[36439] 00:08:56.259 bw ( KiB/s): min=20824, max=24232, per=37.54%, avg=22528.00, stdev=2409.82, samples=2 00:08:56.259 iops : min= 5206, max= 6058, avg=5632.00, stdev=602.45, samples=2 00:08:56.259 lat (usec) : 250=0.01%, 500=0.02%, 750=0.12%, 1000=0.21% 00:08:56.259 lat (msec) : 2=0.34%, 4=2.02%, 10=47.21%, 20=42.63%, 50=6.41% 00:08:56.259 lat (msec) : 100=1.04% 00:08:56.259 cpu : usr=2.59%, sys=5.47%, ctx=603, majf=0, minf=1 00:08:56.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:56.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.259 issued rwts: total=5607,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:08:56.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.259 job3: (groupid=0, jobs=1): err= 0: pid=3344678: Tue Nov 19 17:26:58 2024 00:08:56.259 read: IOPS=3731, BW=14.6MiB/s (15.3MB/s)(15.2MiB/1044msec) 00:08:56.259 slat (nsec): min=1165, max=14981k, avg=109441.31, stdev=812432.78 00:08:56.259 clat (usec): min=4148, max=61019, avg=15701.87, stdev=9141.00 00:08:56.259 lat (usec): min=4155, max=69116, avg=15811.31, stdev=9196.54 00:08:56.259 clat percentiles (usec): 00:08:56.259 | 1.00th=[ 4424], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9765], 00:08:56.259 | 30.00th=[10552], 40.00th=[11600], 50.00th=[12518], 60.00th=[15270], 00:08:56.259 | 70.00th=[18220], 80.00th=[19792], 90.00th=[23200], 95.00th=[29754], 00:08:56.259 | 99.00th=[59507], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:08:56.259 | 99.99th=[61080] 00:08:56.259 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:08:56.259 slat (nsec): min=1970, max=14554k, avg=123576.87, stdev=834226.07 00:08:56.259 clat (usec): min=1306, max=98967, avg=17333.39, stdev=16650.22 00:08:56.259 lat (usec): min=1337, max=99061, avg=17456.96, stdev=16772.59 00:08:56.259 clat percentiles (usec): 00:08:56.259 | 1.00th=[ 4621], 5.00th=[ 7635], 10.00th=[ 9110], 20.00th=[ 9634], 00:08:56.259 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11731], 60.00th=[12256], 00:08:56.259 | 70.00th=[15139], 80.00th=[17695], 90.00th=[32900], 95.00th=[52691], 00:08:56.259 | 99.00th=[93848], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042], 00:08:56.259 | 99.99th=[99091] 00:08:56.259 bw ( KiB/s): min=15824, max=16944, per=27.30%, avg=16384.00, stdev=791.96, samples=2 00:08:56.259 iops : min= 3956, max= 4236, avg=4096.00, stdev=197.99, samples=2 00:08:56.259 lat (msec) : 2=0.03%, 10=24.19%, 20=58.46%, 50=12.83%, 100=4.50% 00:08:56.259 cpu : usr=2.11%, sys=4.60%, ctx=309, majf=0, minf=1 00:08:56.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:56.259 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.259 issued rwts: total=3896,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.259 00:08:56.259 Run status group 0 (all jobs): 00:08:56.259 READ: bw=55.0MiB/s (57.6MB/s), 6381KiB/s-21.0MiB/s (6534kB/s-22.0MB/s), io=57.4MiB (60.2MB), run=1003-1044msec 00:08:56.259 WRITE: bw=58.6MiB/s (61.4MB/s), 8167KiB/s-21.1MiB/s (8364kB/s-22.1MB/s), io=61.2MiB (64.2MB), run=1003-1044msec 00:08:56.259 00:08:56.259 Disk stats (read/write): 00:08:56.259 nvme0n1: ios=1076/1447, merge=0/0, ticks=14010/36260, in_queue=50270, util=97.19% 00:08:56.259 nvme0n2: ios=3072/3103, merge=0/0, ticks=47143/50247, in_queue=97390, util=82.92% 00:08:56.259 nvme0n3: ios=4608/4702, merge=0/0, ticks=43010/39151, in_queue=82161, util=87.42% 00:08:56.259 nvme0n4: ios=3129/3130, merge=0/0, ticks=35964/51520, in_queue=87484, util=97.46% 00:08:56.259 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:56.259 [global] 00:08:56.259 thread=1 00:08:56.259 invalidate=1 00:08:56.259 rw=randwrite 00:08:56.259 time_based=1 00:08:56.259 runtime=1 00:08:56.259 ioengine=libaio 00:08:56.259 direct=1 00:08:56.259 bs=4096 00:08:56.259 iodepth=128 00:08:56.259 norandommap=0 00:08:56.259 numjobs=1 00:08:56.259 00:08:56.259 verify_dump=1 00:08:56.259 verify_backlog=512 00:08:56.259 verify_state_save=0 00:08:56.259 do_verify=1 00:08:56.259 verify=crc32c-intel 00:08:56.259 [job0] 00:08:56.259 filename=/dev/nvme0n1 00:08:56.259 [job1] 00:08:56.259 filename=/dev/nvme0n2 00:08:56.259 [job2] 00:08:56.259 filename=/dev/nvme0n3 00:08:56.259 [job3] 00:08:56.259 filename=/dev/nvme0n4 00:08:56.259 Could not set queue depth (nvme0n1) 00:08:56.259 Could not set 
queue depth (nvme0n2) 00:08:56.259 Could not set queue depth (nvme0n3) 00:08:56.259 Could not set queue depth (nvme0n4) 00:08:56.519 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:56.519 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:56.519 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:56.519 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:56.519 fio-3.35 00:08:56.519 Starting 4 threads 00:08:57.895 00:08:57.895 job0: (groupid=0, jobs=1): err= 0: pid=3345044: Tue Nov 19 17:26:59 2024 00:08:57.895 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:08:57.895 slat (nsec): min=1050, max=11958k, avg=103334.89, stdev=720142.91 00:08:57.895 clat (usec): min=1209, max=58126, avg=14134.74, stdev=6712.92 00:08:57.895 lat (usec): min=1217, max=58129, avg=14238.07, stdev=6768.12 00:08:57.895 clat percentiles (usec): 00:08:57.895 | 1.00th=[ 2802], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 9241], 00:08:57.895 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11994], 60.00th=[13566], 00:08:57.896 | 70.00th=[16319], 80.00th=[20055], 90.00th=[22676], 95.00th=[25560], 00:08:57.896 | 99.00th=[32375], 99.50th=[52167], 99.90th=[57934], 99.95th=[57934], 00:08:57.896 | 99.99th=[57934] 00:08:57.896 write: IOPS=3983, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1008msec); 0 zone resets 00:08:57.896 slat (nsec): min=1839, max=21210k, avg=135947.60, stdev=833342.04 00:08:57.896 clat (usec): min=3464, max=63956, avg=19236.77, stdev=9003.08 00:08:57.896 lat (usec): min=3472, max=63984, avg=19372.72, stdev=9078.40 00:08:57.896 clat percentiles (usec): 00:08:57.896 | 1.00th=[ 4752], 5.00th=[ 7504], 10.00th=[ 9503], 20.00th=[10290], 00:08:57.896 | 30.00th=[15008], 40.00th=[17433], 50.00th=[17695], 60.00th=[19530], 00:08:57.896 | 
70.00th=[22676], 80.00th=[25560], 90.00th=[30540], 95.00th=[33817], 00:08:57.896 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[55313], 00:08:57.896 | 99.99th=[63701] 00:08:57.896 bw ( KiB/s): min=15288, max=15816, per=25.36%, avg=15552.00, stdev=373.35, samples=2 00:08:57.896 iops : min= 3822, max= 3954, avg=3888.00, stdev=93.34, samples=2 00:08:57.896 lat (msec) : 2=0.29%, 4=0.72%, 10=18.71%, 20=49.56%, 50=29.86% 00:08:57.896 lat (msec) : 100=0.86% 00:08:57.896 cpu : usr=2.58%, sys=4.27%, ctx=422, majf=0, minf=1 00:08:57.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:57.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.896 issued rwts: total=3584,4015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.896 job1: (groupid=0, jobs=1): err= 0: pid=3345045: Tue Nov 19 17:26:59 2024 00:08:57.896 read: IOPS=3046, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:08:57.896 slat (nsec): min=1531, max=45613k, avg=160806.53, stdev=1227354.97 00:08:57.896 clat (usec): min=5158, max=68468, avg=18570.58, stdev=8751.80 00:08:57.896 lat (usec): min=7214, max=68493, avg=18731.39, stdev=8857.32 00:08:57.896 clat percentiles (usec): 00:08:57.896 | 1.00th=[ 8094], 5.00th=[10159], 10.00th=[10290], 20.00th=[10945], 00:08:57.896 | 30.00th=[13173], 40.00th=[15401], 50.00th=[16909], 60.00th=[19530], 00:08:57.896 | 70.00th=[21627], 80.00th=[23200], 90.00th=[26870], 95.00th=[38011], 00:08:57.896 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61080], 99.95th=[64226], 00:08:57.896 | 99.99th=[68682] 00:08:57.896 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:08:57.896 slat (nsec): min=1904, max=22144k, avg=159636.77, stdev=979274.99 00:08:57.896 clat (usec): min=5082, max=74830, avg=22931.95, stdev=13916.17 00:08:57.896 lat (usec): 
min=5093, max=74835, avg=23091.59, stdev=13992.85 00:08:57.896 clat percentiles (usec): 00:08:57.896 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[11994], 00:08:57.896 | 30.00th=[14484], 40.00th=[16581], 50.00th=[17433], 60.00th=[20579], 00:08:57.896 | 70.00th=[25822], 80.00th=[35390], 90.00th=[40109], 95.00th=[56886], 00:08:57.896 | 99.00th=[69731], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:08:57.896 | 99.99th=[74974] 00:08:57.896 bw ( KiB/s): min=12288, max=12288, per=20.04%, avg=12288.00, stdev= 0.00, samples=2 00:08:57.896 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:08:57.896 lat (msec) : 10=6.29%, 20=54.07%, 50=35.83%, 100=3.81% 00:08:57.896 cpu : usr=2.19%, sys=2.99%, ctx=285, majf=0, minf=2 00:08:57.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:08:57.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.896 issued rwts: total=3065,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.896 job2: (groupid=0, jobs=1): err= 0: pid=3345046: Tue Nov 19 17:26:59 2024 00:08:57.896 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:08:57.896 slat (nsec): min=1409, max=8428.5k, avg=109599.10, stdev=676748.22 00:08:57.896 clat (usec): min=6109, max=47221, avg=13184.04, stdev=4036.47 00:08:57.896 lat (usec): min=6113, max=47232, avg=13293.64, stdev=4114.30 00:08:57.896 clat percentiles (usec): 00:08:57.896 | 1.00th=[ 7177], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[11207], 00:08:57.896 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:08:57.896 | 70.00th=[14091], 80.00th=[14877], 90.00th=[17171], 95.00th=[21890], 00:08:57.896 | 99.00th=[26870], 99.50th=[39060], 99.90th=[46924], 99.95th=[47449], 00:08:57.896 | 99.99th=[47449] 00:08:57.896 write: IOPS=4443, BW=17.4MiB/s 
(18.2MB/s)(17.5MiB/1008msec); 0 zone resets 00:08:57.896 slat (usec): min=2, max=19299, avg=117.17, stdev=608.80 00:08:57.896 clat (usec): min=3770, max=49515, avg=16463.56, stdev=8743.43 00:08:57.896 lat (usec): min=3773, max=49523, avg=16580.73, stdev=8788.59 00:08:57.896 clat percentiles (usec): 00:08:57.896 | 1.00th=[ 5800], 5.00th=[ 8094], 10.00th=[ 9372], 20.00th=[ 9634], 00:08:57.896 | 30.00th=[11207], 40.00th=[12256], 50.00th=[14484], 60.00th=[16909], 00:08:57.896 | 70.00th=[17695], 80.00th=[19268], 90.00th=[29230], 95.00th=[36963], 00:08:57.896 | 99.00th=[47449], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:08:57.896 | 99.99th=[49546] 00:08:57.896 bw ( KiB/s): min=17096, max=17720, per=28.39%, avg=17408.00, stdev=441.23, samples=2 00:08:57.896 iops : min= 4274, max= 4430, avg=4352.00, stdev=110.31, samples=2 00:08:57.896 lat (msec) : 4=0.09%, 10=19.74%, 20=67.28%, 50=12.89% 00:08:57.896 cpu : usr=2.88%, sys=5.86%, ctx=535, majf=0, minf=1 00:08:57.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:57.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.896 issued rwts: total=4096,4479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.896 job3: (groupid=0, jobs=1): err= 0: pid=3345047: Tue Nov 19 17:26:59 2024 00:08:57.896 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:08:57.896 slat (nsec): min=1728, max=21121k, avg=127750.98, stdev=828750.37 00:08:57.896 clat (usec): min=7692, max=39544, avg=16383.35, stdev=5836.56 00:08:57.896 lat (usec): min=7700, max=39569, avg=16511.11, stdev=5900.96 00:08:57.896 clat percentiles (usec): 00:08:57.896 | 1.00th=[ 8094], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:08:57.896 | 30.00th=[12780], 40.00th=[13173], 50.00th=[14484], 60.00th=[15664], 00:08:57.896 | 70.00th=[17171], 
80.00th=[19792], 90.00th=[25035], 95.00th=[30278], 00:08:57.896 | 99.00th=[33424], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:08:57.896 | 99.99th=[39584] 00:08:57.896 write: IOPS=3861, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1006msec); 0 zone resets 00:08:57.896 slat (usec): min=2, max=16385, avg=130.40, stdev=712.81 00:08:57.896 clat (usec): min=672, max=47476, avg=17720.98, stdev=8153.63 00:08:57.896 lat (usec): min=5638, max=50258, avg=17851.37, stdev=8202.65 00:08:57.896 clat percentiles (usec): 00:08:57.896 | 1.00th=[ 6194], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11338], 00:08:57.896 | 30.00th=[12911], 40.00th=[13829], 50.00th=[16581], 60.00th=[17433], 00:08:57.896 | 70.00th=[18220], 80.00th=[21103], 90.00th=[29230], 95.00th=[38011], 00:08:57.896 | 99.00th=[43254], 99.50th=[45876], 99.90th=[47449], 99.95th=[47449], 00:08:57.896 | 99.99th=[47449] 00:08:57.896 bw ( KiB/s): min=13672, max=16384, per=24.51%, avg=15028.00, stdev=1917.67, samples=2 00:08:57.896 iops : min= 3418, max= 4096, avg=3757.00, stdev=479.42, samples=2 00:08:57.896 lat (usec) : 750=0.01% 00:08:57.896 lat (msec) : 10=5.50%, 20=72.67%, 50=21.81% 00:08:57.896 cpu : usr=3.28%, sys=5.67%, ctx=360, majf=0, minf=1 00:08:57.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:57.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.896 issued rwts: total=3584,3885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.896 00:08:57.896 Run status group 0 (all jobs): 00:08:57.896 READ: bw=55.5MiB/s (58.2MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.6MB/s), io=56.0MiB (58.7MB), run=1006-1008msec 00:08:57.896 WRITE: bw=59.9MiB/s (62.8MB/s), 11.9MiB/s-17.4MiB/s (12.5MB/s-18.2MB/s), io=60.4MiB (63.3MB), run=1006-1008msec 00:08:57.896 00:08:57.897 Disk stats (read/write): 00:08:57.897 nvme0n1: 
ios=2891/3055, merge=0/0, ticks=23981/39247, in_queue=63228, util=82.16% 00:08:57.897 nvme0n2: ios=2560/2781, merge=0/0, ticks=19558/23936, in_queue=43494, util=82.94% 00:08:57.897 nvme0n3: ios=3095/3582, merge=0/0, ticks=24400/35033, in_queue=59433, util=97.40% 00:08:57.897 nvme0n4: ios=2765/3072, merge=0/0, ticks=26764/32811, in_queue=59575, util=97.46% 00:08:57.897 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:57.897 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3345275 00:08:57.897 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:57.897 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:57.897 [global] 00:08:57.897 thread=1 00:08:57.897 invalidate=1 00:08:57.897 rw=read 00:08:57.897 time_based=1 00:08:57.897 runtime=10 00:08:57.897 ioengine=libaio 00:08:57.897 direct=1 00:08:57.897 bs=4096 00:08:57.897 iodepth=1 00:08:57.897 norandommap=1 00:08:57.897 numjobs=1 00:08:57.897 00:08:57.897 [job0] 00:08:57.897 filename=/dev/nvme0n1 00:08:57.897 [job1] 00:08:57.897 filename=/dev/nvme0n2 00:08:57.897 [job2] 00:08:57.897 filename=/dev/nvme0n3 00:08:57.897 [job3] 00:08:57.897 filename=/dev/nvme0n4 00:08:57.897 Could not set queue depth (nvme0n1) 00:08:57.897 Could not set queue depth (nvme0n2) 00:08:57.897 Could not set queue depth (nvme0n3) 00:08:57.897 Could not set queue depth (nvme0n4) 00:08:58.156 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.156 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.156 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.156 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:08:58.156 fio-3.35 00:08:58.156 Starting 4 threads 00:09:01.444 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:01.444 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41447424, buflen=4096 00:09:01.444 fio: pid=3345456, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:01.444 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:01.444 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45580288, buflen=4096 00:09:01.444 fio: pid=3345453, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:01.444 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:01.444 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:01.444 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=319488, buflen=4096 00:09:01.444 fio: pid=3345449, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:01.444 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:01.444 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:01.703 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10121216, buflen=4096 00:09:01.703 fio: pid=3345450, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:01.703 17:27:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:01.703 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:01.703 00:09:01.703 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3345449: Tue Nov 19 17:27:03 2024 00:09:01.703 read: IOPS=25, BW=99.2KiB/s (102kB/s)(312KiB/3144msec) 00:09:01.703 slat (usec): min=8, max=19856, avg=461.25, stdev=2764.42 00:09:01.703 clat (usec): min=292, max=43082, avg=39529.65, stdev=7900.06 00:09:01.703 lat (usec): min=315, max=61017, avg=39996.50, stdev=8463.69 00:09:01.703 clat percentiles (usec): 00:09:01.703 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:01.703 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.703 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:01.703 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:01.703 | 99.99th=[43254] 00:09:01.703 bw ( KiB/s): min= 93, max= 104, per=0.35%, avg=99.50, stdev= 5.05, samples=6 00:09:01.703 iops : min= 23, max= 26, avg=24.83, stdev= 1.33, samples=6 00:09:01.703 lat (usec) : 500=3.80% 00:09:01.703 lat (msec) : 50=94.94% 00:09:01.703 cpu : usr=0.00%, sys=0.13%, ctx=84, majf=0, minf=1 00:09:01.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.703 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3345450: Tue 
Nov 19 17:27:03 2024 00:09:01.703 read: IOPS=733, BW=2934KiB/s (3004kB/s)(9884KiB/3369msec) 00:09:01.703 slat (usec): min=6, max=17767, avg=22.15, stdev=450.04 00:09:01.703 clat (usec): min=170, max=44898, avg=1330.52, stdev=6552.64 00:09:01.703 lat (usec): min=177, max=59961, avg=1352.66, stdev=6613.57 00:09:01.703 clat percentiles (usec): 00:09:01.703 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 223], 00:09:01.703 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 255], 00:09:01.703 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 457], 00:09:01.703 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:01.703 | 99.99th=[44827] 00:09:01.703 bw ( KiB/s): min= 96, max= 9206, per=10.54%, avg=2979.67, stdev=4473.85, samples=6 00:09:01.703 iops : min= 24, max= 2301, avg=744.83, stdev=1118.32, samples=6 00:09:01.703 lat (usec) : 250=50.12%, 500=46.68%, 750=0.53% 00:09:01.703 lat (msec) : 50=2.63% 00:09:01.703 cpu : usr=0.15%, sys=0.68%, ctx=2476, majf=0, minf=2 00:09:01.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 issued rwts: total=2472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.703 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3345453: Tue Nov 19 17:27:03 2024 00:09:01.703 read: IOPS=3796, BW=14.8MiB/s (15.6MB/s)(43.5MiB/2931msec) 00:09:01.703 slat (usec): min=6, max=15687, avg= 9.91, stdev=182.88 00:09:01.703 clat (usec): min=170, max=40924, avg=250.54, stdev=543.82 00:09:01.703 lat (usec): min=178, max=40933, avg=260.45, stdev=574.02 00:09:01.703 clat percentiles (usec): 00:09:01.703 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 229], 00:09:01.703 | 
30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:09:01.703 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:09:01.703 | 99.00th=[ 297], 99.50th=[ 343], 99.90th=[ 433], 99.95th=[ 482], 00:09:01.703 | 99.99th=[40633] 00:09:01.703 bw ( KiB/s): min=14440, max=16448, per=54.80%, avg=15483.20, stdev=710.84, samples=5 00:09:01.703 iops : min= 3610, max= 4112, avg=3870.80, stdev=177.71, samples=5 00:09:01.703 lat (usec) : 250=62.68%, 500=37.27%, 750=0.02% 00:09:01.703 lat (msec) : 50=0.02% 00:09:01.703 cpu : usr=1.02%, sys=3.38%, ctx=11131, majf=0, minf=2 00:09:01.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 issued rwts: total=11129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.703 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3345456: Tue Nov 19 17:27:03 2024 00:09:01.703 read: IOPS=3712, BW=14.5MiB/s (15.2MB/s)(39.5MiB/2726msec) 00:09:01.703 slat (nsec): min=6514, max=40531, avg=8285.47, stdev=1481.73 00:09:01.703 clat (usec): min=194, max=2333, avg=256.59, stdev=66.25 00:09:01.703 lat (usec): min=202, max=2342, avg=264.88, stdev=66.27 00:09:01.703 clat percentiles (usec): 00:09:01.703 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:09:01.703 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:09:01.703 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 429], 00:09:01.703 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 529], 99.95th=[ 807], 00:09:01.703 | 99.99th=[ 1369] 00:09:01.703 bw ( KiB/s): min=12696, max=16424, per=54.39%, avg=15368.00, stdev=1523.44, samples=5 00:09:01.703 iops : min= 3174, max= 4106, avg=3842.00, stdev=380.86, samples=5 
00:09:01.703 lat (usec) : 250=68.97%, 500=29.02%, 750=1.94%, 1000=0.01% 00:09:01.703 lat (msec) : 2=0.04%, 4=0.01% 00:09:01.703 cpu : usr=2.57%, sys=5.50%, ctx=10120, majf=0, minf=2 00:09:01.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.703 issued rwts: total=10120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.703 00:09:01.703 Run status group 0 (all jobs): 00:09:01.703 READ: bw=27.6MiB/s (28.9MB/s), 99.2KiB/s-14.8MiB/s (102kB/s-15.6MB/s), io=93.0MiB (97.5MB), run=2726-3369msec 00:09:01.703 00:09:01.703 Disk stats (read/write): 00:09:01.703 nvme0n1: ios=99/0, merge=0/0, ticks=3460/0, in_queue=3460, util=97.87% 00:09:01.703 nvme0n2: ios=2471/0, merge=0/0, ticks=3279/0, in_queue=3279, util=95.35% 00:09:01.703 nvme0n3: ios=10867/0, merge=0/0, ticks=2680/0, in_queue=2680, util=95.64% 00:09:01.703 nvme0n4: ios=9890/0, merge=0/0, ticks=2385/0, in_queue=2385, util=96.41% 00:09:01.962 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:01.962 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:02.222 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:02.222 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:02.222 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:02.222 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:02.482 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:02.482 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:02.741 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:02.741 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3345275 00:09:02.741 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:02.741 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.001 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.001 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:03.001 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:03.001 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.001 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:03.001 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.001 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:03.001 17:27:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:03.001 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:03.001 nvmf hotplug test: fio failed as expected 00:09:03.001 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.001 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:03.001 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.261 rmmod nvme_tcp 00:09:03.261 rmmod nvme_fabrics 00:09:03.261 rmmod nvme_keyring 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3342336 ']' 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3342336 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3342336 ']' 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3342336 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3342336 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3342336' 00:09:03.261 killing process with pid 3342336 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3342336 00:09:03.261 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3342336 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.521 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.429 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.429 00:09:05.429 real 0m27.111s 00:09:05.429 user 1m47.130s 00:09:05.429 sys 0m8.605s 00:09:05.429 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.429 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.429 ************************************ 00:09:05.429 END TEST nvmf_fio_target 00:09:05.429 ************************************ 00:09:05.429 17:27:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:05.429 17:27:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.429 17:27:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.429 17:27:07 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.688 ************************************ 00:09:05.688 START TEST nvmf_bdevio 00:09:05.688 ************************************ 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:05.688 * Looking for test storage... 00:09:05.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.688 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.689 --rc genhtml_branch_coverage=1 00:09:05.689 --rc genhtml_function_coverage=1 00:09:05.689 --rc genhtml_legend=1 00:09:05.689 --rc geninfo_all_blocks=1 00:09:05.689 --rc geninfo_unexecuted_blocks=1 00:09:05.689 00:09:05.689 ' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.689 --rc genhtml_branch_coverage=1 00:09:05.689 --rc genhtml_function_coverage=1 00:09:05.689 --rc genhtml_legend=1 00:09:05.689 --rc geninfo_all_blocks=1 00:09:05.689 --rc geninfo_unexecuted_blocks=1 00:09:05.689 00:09:05.689 ' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.689 --rc genhtml_branch_coverage=1 00:09:05.689 --rc genhtml_function_coverage=1 00:09:05.689 --rc genhtml_legend=1 00:09:05.689 --rc geninfo_all_blocks=1 00:09:05.689 --rc geninfo_unexecuted_blocks=1 00:09:05.689 00:09:05.689 ' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.689 --rc genhtml_branch_coverage=1 00:09:05.689 --rc genhtml_function_coverage=1 00:09:05.689 --rc genhtml_legend=1 00:09:05.689 --rc geninfo_all_blocks=1 00:09:05.689 --rc geninfo_unexecuted_blocks=1 00:09:05.689 00:09:05.689 ' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.689 17:27:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.689 17:27:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.689 17:27:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.689 
17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.689 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:12.262 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:12.263 17:27:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:12.263 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:12.263 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:12.263 
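gather_supported_nvmf_pci_devs resolves each matched NIC's interface name by globbing the device's net/ directory under sysfs and stripping the path prefix, which is how the two "Found net devices under 0000:86:00.x" lines are produced. A sketch of that lookup against a scratch directory, so it runs without the real e810 hardware (device addresses and cvl_0_* names are copied from the log; the scratch tree stands in for /sys/bus/pci/devices):

```shell
#!/usr/bin/env bash
# Netdev discovery as the harness does it: glob <pci device>/net/* and keep
# only the bare interface name. A temp tree replaces real sysfs here.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # one subdir per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip to interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```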
17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:12.263 Found net devices under 0000:86:00.0: cvl_0_0 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:12.263 Found net devices under 0000:86:00.1: cvl_0_1 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:12.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:12.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:09:12.263 00:09:12.263 --- 10.0.0.2 ping statistics --- 00:09:12.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.263 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:09:12.263 00:09:12.263 --- 10.0.0.1 ping statistics --- 00:09:12.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.263 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.263 17:27:13 
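The ipts call above tags the rule it inserts with an `SPDK_NVMF` comment precisely so that teardown can later remove the test's rules and nothing else: the iptr helper at the end of the run pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. The sweep itself is plain text filtering; this sketch applies it to a captured ruleset string instead of the live firewall, so it needs no root (the ruleset literal is a shortened stand-in, not the machine's real rules):

```shell
#!/usr/bin/env bash
# Tag-and-sweep cleanup: keep every rule that does not carry the SPDK_NVMF
# comment. The real iptr feeds iptables-save through this filter and back
# into iptables-restore; here we filter a saved-rules string.
sweep_spdk_rules() {
    grep -v 'SPDK_NVMF'
}

ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'

printf '%s\n' "$ruleset" | sweep_spdk_rules
```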
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3350402 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3350402 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3350402 ']' 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.263 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.263 [2024-11-19 17:27:13.922580] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:09:12.263 [2024-11-19 17:27:13.922628] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.264 [2024-11-19 17:27:14.000932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.264 [2024-11-19 17:27:14.040636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.264 [2024-11-19 17:27:14.040674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.264 [2024-11-19 17:27:14.040684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.264 [2024-11-19 17:27:14.040691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.264 [2024-11-19 17:27:14.040696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:12.264 [2024-11-19 17:27:14.042153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.264 [2024-11-19 17:27:14.042181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.264 [2024-11-19 17:27:14.042290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.264 [2024-11-19 17:27:14.042291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.264 [2024-11-19 17:27:14.190978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.264 17:27:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.264 Malloc0 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.264 [2024-11-19 17:27:14.245471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.264 { 00:09:12.264 "params": { 00:09:12.264 "name": "Nvme$subsystem", 00:09:12.264 "trtype": "$TEST_TRANSPORT", 00:09:12.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.264 "adrfam": "ipv4", 00:09:12.264 "trsvcid": "$NVMF_PORT", 00:09:12.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.264 "hdgst": ${hdgst:-false}, 00:09:12.264 "ddgst": ${ddgst:-false} 00:09:12.264 }, 00:09:12.264 "method": "bdev_nvme_attach_controller" 00:09:12.264 } 00:09:12.264 EOF 00:09:12.264 )") 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
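The rpc_cmd setup sequence above (transport, malloc bdev, subsystem, namespace, listener) corresponds to plain scripts/rpc.py invocations against the running target. They are assembled as strings rather than executed in this sketch, since they need a live nvmf_tgt listening on its RPC socket; the arguments are copied from the trace:

```shell
#!/usr/bin/env bash
# The bdevio target setup restated as direct rpc.py calls (sketch only:
# command strings, not executed, because they require a running nvmf_tgt).
rpc="scripts/rpc.py"        # talks to the default /var/tmp/spdk.sock
setup_cmds=(
    "$rpc nvmf_create_transport -t tcp -o -u 8192"
    "$rpc bdev_malloc_create 64 512 -b Malloc0"
    "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
    "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${setup_cmds[@]}"
```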
00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:12.264 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.264 "params": { 00:09:12.264 "name": "Nvme1", 00:09:12.264 "trtype": "tcp", 00:09:12.264 "traddr": "10.0.0.2", 00:09:12.264 "adrfam": "ipv4", 00:09:12.264 "trsvcid": "4420", 00:09:12.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.264 "hdgst": false, 00:09:12.264 "ddgst": false 00:09:12.264 }, 00:09:12.264 "method": "bdev_nvme_attach_controller" 00:09:12.264 }' 00:09:12.264 [2024-11-19 17:27:14.293960] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:09:12.264 [2024-11-19 17:27:14.294000] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350429 ] 00:09:12.264 [2024-11-19 17:27:14.370988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.264 [2024-11-19 17:27:14.414963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.264 [2024-11-19 17:27:14.415035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.264 [2024-11-19 17:27:14.415035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.523 I/O targets: 00:09:12.523 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:12.523 00:09:12.523 00:09:12.523 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.523 http://cunit.sourceforge.net/ 00:09:12.523 00:09:12.523 00:09:12.523 Suite: bdevio tests on: Nvme1n1 00:09:12.523 Test: blockdev write read block ...passed 00:09:12.523 Test: blockdev write zeroes read block ...passed 00:09:12.523 Test: blockdev write zeroes read no split ...passed 00:09:12.523 Test: blockdev write zeroes read split 
...passed 00:09:12.783 Test: blockdev write zeroes read split partial ...passed 00:09:12.783 Test: blockdev reset ...[2024-11-19 17:27:14.762556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:12.783 [2024-11-19 17:27:14.762626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66340 (9): Bad file descriptor 00:09:12.783 [2024-11-19 17:27:14.817402] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:12.783 passed 00:09:12.783 Test: blockdev write read 8 blocks ...passed 00:09:12.783 Test: blockdev write read size > 128k ...passed 00:09:12.783 Test: blockdev write read invalid size ...passed 00:09:12.783 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:12.783 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:12.783 Test: blockdev write read max offset ...passed 00:09:12.783 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:12.783 Test: blockdev writev readv 8 blocks ...passed 00:09:12.783 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.042 Test: blockdev writev readv block ...passed 00:09:13.042 Test: blockdev writev readv size > 128k ...passed 00:09:13.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.042 Test: blockdev comparev and writev ...[2024-11-19 17:27:15.029773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 17:27:15.029809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.029824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 
17:27:15.029832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.030077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 17:27:15.030088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.030100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 17:27:15.030107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.030345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 17:27:15.030355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.030367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 17:27:15.030374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.030610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 17:27:15.030620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.030632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.042 [2024-11-19 17:27:15.030639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:13.042 passed 00:09:13.042 Test: blockdev nvme passthru rw ...passed 00:09:13.042 Test: blockdev nvme passthru vendor specific ...[2024-11-19 17:27:15.112426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.042 [2024-11-19 17:27:15.112448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.112554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.042 [2024-11-19 17:27:15.112563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.112664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.042 [2024-11-19 17:27:15.112673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:13.042 [2024-11-19 17:27:15.112777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.042 [2024-11-19 17:27:15.112786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:13.042 passed 00:09:13.042 Test: blockdev nvme admin passthru ...passed 00:09:13.042 Test: blockdev copy ...passed 00:09:13.042 00:09:13.042 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.042 suites 1 1 n/a 0 0 00:09:13.042 tests 23 23 23 0 0 00:09:13.042 asserts 152 152 152 0 n/a 00:09:13.042 00:09:13.042 Elapsed time = 1.121 seconds 
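The initiator configuration that bdevio consumed via `--json /dev/fd/62` earlier in this run was produced by gen_nvmf_target_json, which builds one attach_controller stanza per subsystem from a shell heredoc (letting $NVMF_FIRST_TARGET_IP and friends expand inline) and then merges the pieces with jq. A minimal reproduction of the heredoc-template step, with the jq merge omitted:

```shell
#!/usr/bin/env bash
# Heredoc templating as in gen_nvmf_target_json: variables expand inside
# the unquoted EOF body, yielding the params printed before the bdevio run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```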
00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.302 rmmod nvme_tcp 00:09:13.302 rmmod nvme_fabrics 00:09:13.302 rmmod nvme_keyring 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3350402 ']' 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3350402 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3350402 ']' 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3350402 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3350402 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3350402' 00:09:13.302 killing process with pid 3350402 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3350402 00:09:13.302 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3350402 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.561 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.099 00:09:16.099 real 0m10.022s 00:09:16.099 user 0m10.099s 00:09:16.099 sys 0m4.982s 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.099 ************************************ 00:09:16.099 END TEST nvmf_bdevio 00:09:16.099 ************************************ 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:16.099 00:09:16.099 real 4m36.181s 00:09:16.099 user 10m30.517s 00:09:16.099 sys 1m39.716s 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.099 ************************************ 00:09:16.099 END TEST nvmf_target_core 00:09:16.099 ************************************ 00:09:16.099 17:27:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:16.099 17:27:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.099 17:27:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.099 17:27:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:16.099 ************************************ 00:09:16.099 START TEST nvmf_target_extra 00:09:16.099 ************************************ 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:16.099 * Looking for test storage... 00:09:16.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.099 17:27:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.100 --rc genhtml_branch_coverage=1 00:09:16.100 --rc genhtml_function_coverage=1 00:09:16.100 --rc genhtml_legend=1 00:09:16.100 --rc geninfo_all_blocks=1 
00:09:16.100 --rc geninfo_unexecuted_blocks=1 00:09:16.100 00:09:16.100 ' 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.100 --rc genhtml_branch_coverage=1 00:09:16.100 --rc genhtml_function_coverage=1 00:09:16.100 --rc genhtml_legend=1 00:09:16.100 --rc geninfo_all_blocks=1 00:09:16.100 --rc geninfo_unexecuted_blocks=1 00:09:16.100 00:09:16.100 ' 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.100 --rc genhtml_branch_coverage=1 00:09:16.100 --rc genhtml_function_coverage=1 00:09:16.100 --rc genhtml_legend=1 00:09:16.100 --rc geninfo_all_blocks=1 00:09:16.100 --rc geninfo_unexecuted_blocks=1 00:09:16.100 00:09:16.100 ' 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.100 --rc genhtml_branch_coverage=1 00:09:16.100 --rc genhtml_function_coverage=1 00:09:16.100 --rc genhtml_legend=1 00:09:16.100 --rc geninfo_all_blocks=1 00:09:16.100 --rc geninfo_unexecuted_blocks=1 00:09:16.100 00:09:16.100 ' 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.100 17:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:16.100 ************************************ 00:09:16.100 START TEST nvmf_example 00:09:16.100 ************************************ 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:16.100 * Looking for test storage... 00:09:16.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.100 
17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.100 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.101 --rc genhtml_branch_coverage=1 00:09:16.101 --rc genhtml_function_coverage=1 00:09:16.101 --rc genhtml_legend=1 00:09:16.101 --rc geninfo_all_blocks=1 00:09:16.101 --rc geninfo_unexecuted_blocks=1 00:09:16.101 00:09:16.101 ' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.101 --rc genhtml_branch_coverage=1 00:09:16.101 --rc genhtml_function_coverage=1 00:09:16.101 --rc genhtml_legend=1 00:09:16.101 --rc geninfo_all_blocks=1 00:09:16.101 --rc geninfo_unexecuted_blocks=1 00:09:16.101 00:09:16.101 ' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.101 --rc genhtml_branch_coverage=1 00:09:16.101 --rc genhtml_function_coverage=1 00:09:16.101 --rc genhtml_legend=1 00:09:16.101 --rc geninfo_all_blocks=1 00:09:16.101 --rc geninfo_unexecuted_blocks=1 00:09:16.101 00:09:16.101 ' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.101 --rc 
genhtml_branch_coverage=1 00:09:16.101 --rc genhtml_function_coverage=1 00:09:16.101 --rc genhtml_legend=1 00:09:16.101 --rc geninfo_all_blocks=1 00:09:16.101 --rc geninfo_unexecuted_blocks=1 00:09:16.101 00:09:16.101 ' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:16.101 17:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.101 
17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.101 17:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.674 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.675 17:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:22.675 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:22.675 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:22.675 Found net devices under 0000:86:00.0: cvl_0_0 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.675 17:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:22.675 Found net devices under 0000:86:00.1: cvl_0_1 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.675 
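The device-discovery loop traced above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `"${pci_net_devs[@]##*/}"`) maps a PCI address to its kernel net devices by globbing sysfs and stripping the path. A minimal, self-contained sketch of that mapping — the helper name `net_devs_for_pci` and the overridable sysfs root are illustrative, not part of nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of how the harness maps a PCI address to its net devices:
# glob /sys/bus/pci/devices/<addr>/net/* and keep only the basename.
shopt -s nullglob   # an absent net/ dir yields an empty array, not a literal glob

net_devs_for_pci() {   # usage: net_devs_for_pci 0000:86:00.0 [sysfs_root]
  local pci=$1 root=${2:-/sys/bus/pci/devices}
  local devs=("$root/$pci/net/"*)
  printf '%s\n' "${devs[@]##*/}"   # strip the path, keeping e.g. cvl_0_0
}
```

On the machine in this log, `net_devs_for_pci 0000:86:00.0` would print `cvl_0_0`, matching the "Found net devices under 0000:86:00.0" line above.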
17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.675 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:09:22.675 00:09:22.675 --- 10.0.0.2 ping statistics --- 00:09:22.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.675 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:09:22.675 00:09:22.675 --- 10.0.0.1 ping statistics --- 00:09:22.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.675 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.675 17:27:24 
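The `nvmf_tcp_init` sequence traced above moves one port of the two-port NIC into a network namespace so initiator and target can talk over real hardware on one host. The interface names (`cvl_0_0`/`cvl_0_1`), namespace, and 10.0.0.0/24 addressing below are taken from the log; the script is a dry-run sketch that only prints the commands, since the real ones need root and the NIC:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe-oF TCP loopback topology built in the log above.
run() { echo "+ $*"; }   # swap echo for execution (as root) to apply for real

TGT_IF=cvl_0_0  IN_IF=cvl_0_1
TGT_NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2 IN_IP=10.0.0.1

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$IN_IF"
run ip netns add "$TGT_NS"
run ip link set "$TGT_IF" netns "$TGT_NS"        # target port leaves the root ns
run ip addr add "$IN_IP/24" dev "$IN_IF"
run ip netns exec "$TGT_NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$IN_IF" up
run ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
run ip netns exec "$TGT_NS" ip link set lo up
run iptables -I INPUT 1 -i "$IN_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
run ping -c 1 "$TGT_IP"                          # sanity check: initiator -> target
```

After this, anything the target must run (the `nvmf` app, the namespace-side ping) is wrapped in `ip netns exec cvl_0_0_ns_spdk`, which is exactly what `NVMF_TARGET_NS_CMD` holds in the log.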
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3354255 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3354255 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3354255 ']' 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:22.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.675 17:27:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:23.245 
17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:23.245 17:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:33.299 Initializing NVMe Controllers 00:09:33.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:33.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:33.299 Initialization complete. Launching workers. 00:09:33.299 ======================================================== 00:09:33.299 Latency(us) 00:09:33.299 Device Information : IOPS MiB/s Average min max 00:09:33.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17704.48 69.16 3614.30 705.15 15452.75 00:09:33.299 ======================================================== 00:09:33.299 Total : 17704.48 69.16 3614.30 705.15 15452.75 00:09:33.299 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.561 rmmod nvme_tcp 00:09:33.561 rmmod nvme_fabrics 00:09:33.561 rmmod nvme_keyring 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3354255 ']' 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3354255 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3354255 ']' 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3354255 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3354255 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3354255' 00:09:33.561 killing process with pid 3354255 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3354255 00:09:33.561 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3354255 00:09:33.821 nvmf threads initialize successfully 00:09:33.821 bdev subsystem init successfully 00:09:33.821 created a nvmf target service 00:09:33.821 create targets's poll groups done 00:09:33.821 all subsystems of target started 00:09:33.821 nvmf target is running 00:09:33.821 all subsystems of target stopped 00:09:33.821 destroy targets's poll groups done 00:09:33.821 destroyed the nvmf target service 00:09:33.821 bdev subsystem 
finish successfully 00:09:33.821 nvmf threads destroy successfully 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.821 17:27:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.729 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.729 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:35.729 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.729 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.729 00:09:35.729 real 0m19.873s 00:09:35.729 user 0m46.167s 00:09:35.729 sys 0m6.112s 00:09:35.729 
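The target-side provisioning the example test performed via `rpc_cmd` (create the TCP transport, back a subsystem with a 64 MiB/512 B malloc bdev, expose it on 10.0.0.2:4420) maps onto SPDK's `scripts/rpc.py` as sketched below. The NQN, serial, and malloc geometry come from the log; the `rpc.py` path is an assumption about an SPDK checkout, and the commands are printed rather than executed since they need a live target on /var/tmp/spdk.sock:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence from the nvmf_example test above.
RPC="scripts/rpc.py"             # relative path inside an SPDK checkout (assumption)
NQN=nqn.2016-06.io.spdk:cnode1

rpc() { echo "+ $RPC $*"; }      # swap echo for "$RPC" to run against a live target

rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
rpc bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0           # attach the bdev as a namespace
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

With that in place, the `spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...'` run in the log connects as the initiator and drives the randrw workload whose latency table appears above.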
17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.729 17:27:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.729 ************************************ 00:09:35.729 END TEST nvmf_example 00:09:35.729 ************************************ 00:09:35.990 17:27:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:35.990 17:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.990 17:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.990 17:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:35.990 ************************************ 00:09:35.990 START TEST nvmf_filesystem 00:09:35.990 ************************************ 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:35.990 * Looking for test storage... 
00:09:35.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:35.990 
17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.990 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:35.990 --rc genhtml_branch_coverage=1 00:09:35.990 --rc genhtml_function_coverage=1 00:09:35.990 --rc genhtml_legend=1 00:09:35.990 --rc geninfo_all_blocks=1 00:09:35.990 --rc geninfo_unexecuted_blocks=1 00:09:35.990 00:09:35.990 ' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.990 --rc genhtml_branch_coverage=1 00:09:35.990 --rc genhtml_function_coverage=1 00:09:35.990 --rc genhtml_legend=1 00:09:35.990 --rc geninfo_all_blocks=1 00:09:35.990 --rc geninfo_unexecuted_blocks=1 00:09:35.990 00:09:35.990 ' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.990 --rc genhtml_branch_coverage=1 00:09:35.990 --rc genhtml_function_coverage=1 00:09:35.990 --rc genhtml_legend=1 00:09:35.990 --rc geninfo_all_blocks=1 00:09:35.990 --rc geninfo_unexecuted_blocks=1 00:09:35.990 00:09:35.990 ' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.990 --rc genhtml_branch_coverage=1 00:09:35.990 --rc genhtml_function_coverage=1 00:09:35.990 --rc genhtml_legend=1 00:09:35.990 --rc geninfo_all_blocks=1 00:09:35.990 --rc geninfo_unexecuted_blocks=1 00:09:35.990 00:09:35.990 ' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:35.990 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:35.990 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:35.990 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:35.990 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:35.991 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:35.991 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:35.991 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:36.255 
17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:36.255 #define SPDK_CONFIG_H 00:09:36.255 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:36.255 #define SPDK_CONFIG_APPS 1 00:09:36.255 #define SPDK_CONFIG_ARCH native 00:09:36.255 #undef SPDK_CONFIG_ASAN 00:09:36.255 #undef SPDK_CONFIG_AVAHI 00:09:36.255 #undef SPDK_CONFIG_CET 00:09:36.255 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:36.255 #define SPDK_CONFIG_COVERAGE 1 00:09:36.255 #define SPDK_CONFIG_CROSS_PREFIX 00:09:36.255 #undef SPDK_CONFIG_CRYPTO 00:09:36.255 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:36.255 #undef SPDK_CONFIG_CUSTOMOCF 00:09:36.255 #undef SPDK_CONFIG_DAOS 00:09:36.255 #define SPDK_CONFIG_DAOS_DIR 00:09:36.255 #define SPDK_CONFIG_DEBUG 1 00:09:36.255 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:36.255 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:36.255 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:36.255 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:36.255 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:36.255 #undef SPDK_CONFIG_DPDK_UADK 00:09:36.255 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:36.255 #define SPDK_CONFIG_EXAMPLES 1 00:09:36.255 #undef SPDK_CONFIG_FC 00:09:36.255 #define SPDK_CONFIG_FC_PATH 00:09:36.255 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:36.255 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:36.255 #define SPDK_CONFIG_FSDEV 1 00:09:36.255 #undef SPDK_CONFIG_FUSE 00:09:36.255 #undef SPDK_CONFIG_FUZZER 00:09:36.255 #define SPDK_CONFIG_FUZZER_LIB 00:09:36.255 #undef SPDK_CONFIG_GOLANG 00:09:36.255 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:36.255 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:36.255 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:36.255 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:36.255 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:36.255 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:36.255 #undef SPDK_CONFIG_HAVE_LZ4 00:09:36.255 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:36.255 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:36.255 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:36.255 #define SPDK_CONFIG_IDXD 1 00:09:36.255 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:36.255 #undef SPDK_CONFIG_IPSEC_MB 00:09:36.255 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:36.255 #define SPDK_CONFIG_ISAL 1 00:09:36.255 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:36.255 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:36.255 #define SPDK_CONFIG_LIBDIR 00:09:36.255 #undef SPDK_CONFIG_LTO 00:09:36.255 #define SPDK_CONFIG_MAX_LCORES 128 00:09:36.255 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:36.255 #define SPDK_CONFIG_NVME_CUSE 1 00:09:36.255 #undef SPDK_CONFIG_OCF 00:09:36.255 #define SPDK_CONFIG_OCF_PATH 00:09:36.255 #define SPDK_CONFIG_OPENSSL_PATH 00:09:36.255 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:36.255 #define SPDK_CONFIG_PGO_DIR 00:09:36.255 #undef SPDK_CONFIG_PGO_USE 00:09:36.255 #define SPDK_CONFIG_PREFIX /usr/local 00:09:36.255 #undef SPDK_CONFIG_RAID5F 00:09:36.255 #undef SPDK_CONFIG_RBD 00:09:36.255 #define SPDK_CONFIG_RDMA 1 00:09:36.255 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:36.255 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:36.255 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:36.255 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:36.255 #define SPDK_CONFIG_SHARED 1 00:09:36.255 #undef SPDK_CONFIG_SMA 00:09:36.255 #define SPDK_CONFIG_TESTS 1 00:09:36.255 #undef SPDK_CONFIG_TSAN 00:09:36.255 #define SPDK_CONFIG_UBLK 1 00:09:36.255 #define SPDK_CONFIG_UBSAN 1 00:09:36.255 #undef SPDK_CONFIG_UNIT_TESTS 00:09:36.255 #undef SPDK_CONFIG_URING 00:09:36.255 #define SPDK_CONFIG_URING_PATH 00:09:36.255 #undef SPDK_CONFIG_URING_ZNS 00:09:36.255 #undef SPDK_CONFIG_USDT 00:09:36.255 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:36.255 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:36.255 #define SPDK_CONFIG_VFIO_USER 1 00:09:36.255 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:36.255 #define SPDK_CONFIG_VHOST 1 00:09:36.255 #define SPDK_CONFIG_VIRTIO 1 00:09:36.255 #undef SPDK_CONFIG_VTUNE 00:09:36.255 #define SPDK_CONFIG_VTUNE_DIR 00:09:36.255 #define SPDK_CONFIG_WERROR 1 00:09:36.255 #define SPDK_CONFIG_WPDK_DIR 00:09:36.255 #undef SPDK_CONFIG_XNVME 00:09:36.255 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:36.255 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@66 -- # TEST_TAG=N/A 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@69 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # uname -s 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # PM_OS=Linux 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO=() 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@75 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # SUDO[0]= 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # SUDO[1]='sudo -E' 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@80 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == FreeBSD ]] 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@83 -- # [[ Linux == Linux ]] 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@83 -- # [[ ............................... != QEMU ]] 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@83 -- # [[ ! 
-e /.dockerenv ]] 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@86 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@87 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@90 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:36.256 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:36.256 
17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:36.256 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:36.256 
17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:36.256 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:36.256 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:36.257 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3356661 ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3356661 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.rb9qBE 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rb9qBE/tests/target /tmp/spdk.rb9qBE 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189156495360 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6807465984 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981575168 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:36.258 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=405504 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:36.258 * Looking for test storage... 
00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189156495360 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9022058496 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:36.258 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.258 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:36.259 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.259 --rc genhtml_branch_coverage=1 00:09:36.259 --rc genhtml_function_coverage=1 00:09:36.259 --rc genhtml_legend=1 00:09:36.259 --rc geninfo_all_blocks=1 00:09:36.259 --rc geninfo_unexecuted_blocks=1 00:09:36.259 00:09:36.259 ' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.259 --rc genhtml_branch_coverage=1 00:09:36.259 --rc genhtml_function_coverage=1 00:09:36.259 --rc genhtml_legend=1 00:09:36.259 --rc geninfo_all_blocks=1 00:09:36.259 --rc geninfo_unexecuted_blocks=1 00:09:36.259 00:09:36.259 ' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.259 --rc genhtml_branch_coverage=1 00:09:36.259 --rc genhtml_function_coverage=1 00:09:36.259 --rc genhtml_legend=1 00:09:36.259 --rc geninfo_all_blocks=1 00:09:36.259 --rc geninfo_unexecuted_blocks=1 00:09:36.259 00:09:36.259 ' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.259 --rc genhtml_branch_coverage=1 00:09:36.259 --rc genhtml_function_coverage=1 00:09:36.259 --rc genhtml_legend=1 00:09:36.259 --rc geninfo_all_blocks=1 00:09:36.259 --rc geninfo_unexecuted_blocks=1 00:09:36.259 00:09:36.259 ' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.259 17:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.259 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.260 17:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.836 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.837 17:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.837 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.837 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.837 17:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.837 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.837 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.837 17:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:09:42.837 00:09:42.837 --- 10.0.0.2 ping statistics --- 00:09:42.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.837 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:42.837 00:09:42.837 --- 10.0.0.1 ping statistics --- 00:09:42.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.837 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.837 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:42.838 17:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 ************************************ 00:09:42.838 START TEST nvmf_filesystem_no_in_capsule 00:09:42.838 ************************************ 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3359729 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3359729 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3359729 ']' 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 [2024-11-19 17:27:44.527510] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:09:42.838 [2024-11-19 17:27:44.527560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.838 [2024-11-19 17:27:44.611472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.838 [2024-11-19 17:27:44.654300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.838 [2024-11-19 17:27:44.654338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:42.838 [2024-11-19 17:27:44.654346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.838 [2024-11-19 17:27:44.654354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.838 [2024-11-19 17:27:44.654360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.838 [2024-11-19 17:27:44.655807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.838 [2024-11-19 17:27:44.655827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.838 [2024-11-19 17:27:44.655921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.838 [2024-11-19 17:27:44.655921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 [2024-11-19 17:27:44.801797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 Malloc1 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 [2024-11-19 17:27:44.957454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:42.838 17:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:42.838 { 00:09:42.838 "name": "Malloc1", 00:09:42.838 "aliases": [ 00:09:42.838 "51cd4c76-9c69-44f2-aa63-03a2c9221397" 00:09:42.838 ], 00:09:42.838 "product_name": "Malloc disk", 00:09:42.838 "block_size": 512, 00:09:42.838 "num_blocks": 1048576, 00:09:42.838 "uuid": "51cd4c76-9c69-44f2-aa63-03a2c9221397", 00:09:42.838 "assigned_rate_limits": { 00:09:42.838 "rw_ios_per_sec": 0, 00:09:42.838 "rw_mbytes_per_sec": 0, 00:09:42.838 "r_mbytes_per_sec": 0, 00:09:42.838 "w_mbytes_per_sec": 0 00:09:42.838 }, 00:09:42.838 "claimed": true, 00:09:42.838 "claim_type": "exclusive_write", 00:09:42.838 "zoned": false, 00:09:42.838 "supported_io_types": { 00:09:42.838 "read": true, 00:09:42.838 "write": true, 00:09:42.838 "unmap": true, 00:09:42.838 "flush": true, 00:09:42.838 "reset": true, 00:09:42.838 "nvme_admin": false, 00:09:42.838 "nvme_io": false, 00:09:42.838 "nvme_io_md": false, 00:09:42.838 "write_zeroes": true, 00:09:42.838 "zcopy": true, 00:09:42.838 "get_zone_info": false, 00:09:42.838 "zone_management": false, 00:09:42.838 "zone_append": false, 00:09:42.838 "compare": false, 00:09:42.838 "compare_and_write": 
false, 00:09:42.838 "abort": true, 00:09:42.838 "seek_hole": false, 00:09:42.838 "seek_data": false, 00:09:42.838 "copy": true, 00:09:42.838 "nvme_iov_md": false 00:09:42.838 }, 00:09:42.838 "memory_domains": [ 00:09:42.838 { 00:09:42.838 "dma_device_id": "system", 00:09:42.838 "dma_device_type": 1 00:09:42.838 }, 00:09:42.838 { 00:09:42.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.838 "dma_device_type": 2 00:09:42.838 } 00:09:42.839 ], 00:09:42.839 "driver_specific": {} 00:09:42.839 } 00:09:42.839 ]' 00:09:42.839 17:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:42.839 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:42.839 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:43.098 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:43.098 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:43.098 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:43.098 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:43.098 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.476 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:44.476 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:44.476 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.476 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:44.476 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:46.383 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:46.383 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:46.384 17:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:46.384 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:46.952 17:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:48.330 17:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.330 ************************************ 00:09:48.330 START TEST filesystem_ext4 00:09:48.330 ************************************ 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:48.330 17:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:48.330 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:48.330 mke2fs 1.47.0 (5-Feb-2023) 00:09:48.330 Discarding device blocks: 0/522240 done 00:09:48.330 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:48.330 Filesystem UUID: 56ef8ccd-c047-48a7-adc3-70c3c7dd261b 00:09:48.330 Superblock backups stored on blocks: 00:09:48.330 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:48.330 00:09:48.330 Allocating group tables: 0/64 done 00:09:48.330 Writing inode tables: 0/64 done 00:09:48.330 Creating journal (8192 blocks): done 00:09:49.835 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:09:49.835 00:09:49.835 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:49.835 17:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:56.405 17:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3359729 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:56.405 00:09:56.405 real 0m7.806s 00:09:56.405 user 0m0.034s 00:09:56.405 sys 0m0.067s 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.405 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:56.405 ************************************ 00:09:56.405 END TEST filesystem_ext4 00:09:56.405 ************************************ 00:09:56.405 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:56.405 
17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:56.405 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.406 ************************************ 00:09:56.406 START TEST filesystem_btrfs 00:09:56.406 ************************************ 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:56.406 17:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:56.406 btrfs-progs v6.8.1 00:09:56.406 See https://btrfs.readthedocs.io for more information. 00:09:56.406 00:09:56.406 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:56.406 NOTE: several default settings have changed in version 5.15, please make sure 00:09:56.406 this does not affect your deployments: 00:09:56.406 - DUP for metadata (-m dup) 00:09:56.406 - enabled no-holes (-O no-holes) 00:09:56.406 - enabled free-space-tree (-R free-space-tree) 00:09:56.406 00:09:56.406 Label: (null) 00:09:56.406 UUID: 982cec02-e95a-4f92-81d8-d62ee9e5b0a1 00:09:56.406 Node size: 16384 00:09:56.406 Sector size: 4096 (CPU page size: 4096) 00:09:56.406 Filesystem size: 510.00MiB 00:09:56.406 Block group profiles: 00:09:56.406 Data: single 8.00MiB 00:09:56.406 Metadata: DUP 32.00MiB 00:09:56.406 System: DUP 8.00MiB 00:09:56.406 SSD detected: yes 00:09:56.406 Zoned device: no 00:09:56.406 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:56.406 Checksum: crc32c 00:09:56.406 Number of devices: 1 00:09:56.406 Devices: 00:09:56.406 ID SIZE PATH 00:09:56.406 1 510.00MiB /dev/nvme0n1p1 00:09:56.406 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:56.406 17:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3359729 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:56.406 00:09:56.406 real 0m0.422s 00:09:56.406 user 0m0.023s 00:09:56.406 sys 0m0.118s 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.406 
17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:56.406 ************************************ 00:09:56.406 END TEST filesystem_btrfs 00:09:56.406 ************************************ 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.406 ************************************ 00:09:56.406 START TEST filesystem_xfs 00:09:56.406 ************************************ 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:56.406 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:56.665 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:56.665 = sectsz=512 attr=2, projid32bit=1 00:09:56.665 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:56.665 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:56.665 data = bsize=4096 blocks=130560, imaxpct=25 00:09:56.665 = sunit=0 swidth=0 blks 00:09:56.665 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:56.665 log =internal log bsize=4096 blocks=16384, version=2 00:09:56.665 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:56.665 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:57.232 Discarding blocks...Done. 
00:09:57.232 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:57.232 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:59.766 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:59.766 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:59.766 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:59.766 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:00.025 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:00.025 17:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3359729 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:00.025 17:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:00.025 00:10:00.025 real 0m3.474s 00:10:00.025 user 0m0.031s 00:10:00.025 sys 0m0.069s 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:00.025 ************************************ 00:10:00.025 END TEST filesystem_xfs 00:10:00.025 ************************************ 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:00.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:00.025 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3359729 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3359729 ']' 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3359729 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3359729 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3359729' 00:10:00.285 killing process with pid 3359729 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3359729 00:10:00.285 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3359729 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:00.545 00:10:00.545 real 0m18.175s 00:10:00.545 user 1m11.503s 00:10:00.545 sys 0m1.433s 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 ************************************ 00:10:00.545 END TEST nvmf_filesystem_no_in_capsule 00:10:00.545 ************************************ 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.545 17:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 ************************************ 00:10:00.545 START TEST nvmf_filesystem_in_capsule 00:10:00.545 ************************************ 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3362979 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3362979 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3362979 ']' 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.545 17:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.545 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.805 [2024-11-19 17:28:02.772194] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:10:00.805 [2024-11-19 17:28:02.772237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.805 [2024-11-19 17:28:02.852040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.805 [2024-11-19 17:28:02.894976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.805 [2024-11-19 17:28:02.895013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.806 [2024-11-19 17:28:02.895021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.806 [2024-11-19 17:28:02.895026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.806 [2024-11-19 17:28:02.895032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.806 [2024-11-19 17:28:02.896539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.806 [2024-11-19 17:28:02.896652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.806 [2024-11-19 17:28:02.896759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.806 [2024-11-19 17:28:02.896760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.806 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.806 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:00.806 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.806 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.806 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 [2024-11-19 17:28:03.042622] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 Malloc1 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 17:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 [2024-11-19 17:28:03.199759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.065 17:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.065 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:01.065 { 00:10:01.065 "name": "Malloc1", 00:10:01.065 "aliases": [ 00:10:01.065 "cf1eb178-019b-43d1-aafd-c75ec537014d" 00:10:01.065 ], 00:10:01.065 "product_name": "Malloc disk", 00:10:01.065 "block_size": 512, 00:10:01.065 "num_blocks": 1048576, 00:10:01.065 "uuid": "cf1eb178-019b-43d1-aafd-c75ec537014d", 00:10:01.065 "assigned_rate_limits": { 00:10:01.065 "rw_ios_per_sec": 0, 00:10:01.065 "rw_mbytes_per_sec": 0, 00:10:01.065 "r_mbytes_per_sec": 0, 00:10:01.065 "w_mbytes_per_sec": 0 00:10:01.065 }, 00:10:01.065 "claimed": true, 00:10:01.065 "claim_type": "exclusive_write", 00:10:01.065 "zoned": false, 00:10:01.065 "supported_io_types": { 00:10:01.065 "read": true, 00:10:01.065 "write": true, 00:10:01.065 "unmap": true, 00:10:01.065 "flush": true, 00:10:01.065 "reset": true, 00:10:01.065 "nvme_admin": false, 00:10:01.065 "nvme_io": false, 00:10:01.065 "nvme_io_md": false, 00:10:01.065 "write_zeroes": true, 00:10:01.065 "zcopy": true, 00:10:01.065 "get_zone_info": false, 00:10:01.065 "zone_management": false, 00:10:01.065 "zone_append": false, 00:10:01.065 "compare": false, 00:10:01.065 "compare_and_write": false, 00:10:01.065 "abort": true, 00:10:01.065 "seek_hole": false, 00:10:01.065 "seek_data": false, 00:10:01.065 "copy": true, 00:10:01.065 "nvme_iov_md": false 00:10:01.065 }, 00:10:01.065 "memory_domains": [ 00:10:01.065 { 00:10:01.065 "dma_device_id": "system", 00:10:01.065 "dma_device_type": 1 00:10:01.065 }, 00:10:01.065 { 00:10:01.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.066 "dma_device_type": 2 00:10:01.066 } 00:10:01.066 ], 00:10:01.066 
"driver_specific": {} 00:10:01.066 } 00:10:01.066 ]' 00:10:01.066 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:01.066 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:01.066 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:01.325 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:01.325 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:01.325 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:01.325 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:01.325 17:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.262 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:02.262 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:02.262 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.262 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:02.262 17:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:04.800 17:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:04.800 17:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:05.369 17:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.306 ************************************ 00:10:06.306 START TEST filesystem_in_capsule_ext4 00:10:06.306 ************************************ 00:10:06.306 17:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:06.306 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:06.307 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:06.307 mke2fs 1.47.0 (5-Feb-2023) 00:10:06.566 Discarding device blocks: 
0/522240 done 00:10:06.566 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:06.566 Filesystem UUID: 8eaae9b3-7f19-4776-b728-04a5326689bb 00:10:06.566 Superblock backups stored on blocks: 00:10:06.566 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:06.566 00:10:06.566 Allocating group tables: 0/64 done 00:10:06.566 Writing inode tables: 0/64 done 00:10:06.566 Creating journal (8192 blocks): done 00:10:06.566 Writing superblocks and filesystem accounting information: 0/64 done 00:10:06.566 00:10:06.566 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:06.566 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:13.141 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:13.141 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:13.141 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:13.141 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:13.141 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:13.141 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:13.141 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3362979 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:13.142 00:10:13.142 real 0m6.229s 00:10:13.142 user 0m0.038s 00:10:13.142 sys 0m0.062s 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:13.142 ************************************ 00:10:13.142 END TEST filesystem_in_capsule_ext4 00:10:13.142 ************************************ 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.142 ************************************ 00:10:13.142 START 
TEST filesystem_in_capsule_btrfs 00:10:13.142 ************************************ 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:13.142 17:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:13.142 btrfs-progs v6.8.1 00:10:13.142 See https://btrfs.readthedocs.io for more information. 00:10:13.142 00:10:13.142 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:13.142 NOTE: several default settings have changed in version 5.15, please make sure 00:10:13.142 this does not affect your deployments: 00:10:13.142 - DUP for metadata (-m dup) 00:10:13.142 - enabled no-holes (-O no-holes) 00:10:13.142 - enabled free-space-tree (-R free-space-tree) 00:10:13.142 00:10:13.142 Label: (null) 00:10:13.142 UUID: 63deb1b8-8821-4405-bbff-9caede14fab9 00:10:13.142 Node size: 16384 00:10:13.142 Sector size: 4096 (CPU page size: 4096) 00:10:13.142 Filesystem size: 510.00MiB 00:10:13.142 Block group profiles: 00:10:13.142 Data: single 8.00MiB 00:10:13.142 Metadata: DUP 32.00MiB 00:10:13.142 System: DUP 8.00MiB 00:10:13.142 SSD detected: yes 00:10:13.142 Zoned device: no 00:10:13.142 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:13.142 Checksum: crc32c 00:10:13.142 Number of devices: 1 00:10:13.142 Devices: 00:10:13.142 ID SIZE PATH 00:10:13.142 1 510.00MiB /dev/nvme0n1p1 00:10:13.142 00:10:13.142 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:13.142 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3362979 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:13.710 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:13.970 00:10:13.970 real 0m1.156s 00:10:13.970 user 0m0.030s 00:10:13.970 sys 0m0.109s 00:10:13.970 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.970 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:13.970 ************************************ 00:10:13.970 END TEST filesystem_in_capsule_btrfs 00:10:13.970 ************************************ 00:10:13.970 17:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:13.970 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:13.970 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.970 17:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.970 ************************************ 00:10:13.970 START TEST filesystem_in_capsule_xfs 00:10:13.970 ************************************ 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:13.970 
17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:13.970 17:28:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:13.970 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:13.970 = sectsz=512 attr=2, projid32bit=1 00:10:13.970 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:13.970 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:13.970 data = bsize=4096 blocks=130560, imaxpct=25 00:10:13.970 = sunit=0 swidth=0 blks 00:10:13.970 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:13.970 log =internal log bsize=4096 blocks=16384, version=2 00:10:13.970 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:13.970 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:14.907 Discarding blocks...Done. 
00:10:14.907 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:14.907 17:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3362979 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:17.444
00:10:17.444 real 0m3.622s
00:10:17.444 user 0m0.024s
00:10:17.444 sys 0m0.075s
00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:17.444 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:10:17.444 ************************************
00:10:17.444 END TEST filesystem_in_capsule_xfs
00:10:17.444 ************************************
00:10:17.702 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:10:17.960 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:10:17.960 17:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:17.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3362979
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3362979 ']'
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3362979
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3362979
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:17.960 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:17.961 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3362979' killing process with pid 3362979
00:10:17.961 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3362979
00:10:17.961 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3362979
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:10:18.527
00:10:18.527 real 0m17.771s
00:10:18.527 user 1m9.940s
00:10:18.527 sys 0m1.417s
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:18.527 ************************************
00:10:18.527 END TEST nvmf_filesystem_in_capsule
00:10:18.527 ************************************
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:18.527 17:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:20.434 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:20.434
00:10:20.434 real 0m44.645s
00:10:20.434 user 2m23.560s
00:10:20.434 sys 0m7.493s
00:10:20.434 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:20.434 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:20.434 ************************************
00:10:20.434 END TEST nvmf_filesystem
00:10:20.434 ************************************
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:20.694 ************************************
00:10:20.694 START TEST nvmf_target_discovery
00:10:20.694 ************************************
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:10:20.694 * Looking for test storage...
00:10:20.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:20.694 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:20.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.695 --rc genhtml_branch_coverage=1
00:10:20.695 --rc genhtml_function_coverage=1
00:10:20.695 --rc genhtml_legend=1
00:10:20.695 --rc geninfo_all_blocks=1
00:10:20.695 --rc geninfo_unexecuted_blocks=1
00:10:20.695
00:10:20.695 '
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:20.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.695 --rc genhtml_branch_coverage=1
00:10:20.695 --rc genhtml_function_coverage=1
00:10:20.695 --rc genhtml_legend=1
00:10:20.695 --rc geninfo_all_blocks=1
00:10:20.695 --rc geninfo_unexecuted_blocks=1
00:10:20.695
00:10:20.695 '
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:20.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.695 --rc genhtml_branch_coverage=1
00:10:20.695 --rc genhtml_function_coverage=1
00:10:20.695 --rc genhtml_legend=1
00:10:20.695 --rc geninfo_all_blocks=1
00:10:20.695 --rc geninfo_unexecuted_blocks=1
00:10:20.695
00:10:20.695 '
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:20.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.695 --rc genhtml_branch_coverage=1
00:10:20.695 --rc genhtml_function_coverage=1
00:10:20.695 --rc genhtml_legend=1
00:10:20.695 --rc geninfo_all_blocks=1
00:10:20.695 --rc geninfo_unexecuted_blocks=1
00:10:20.695
00:10:20.695 '
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:20.695 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:10:20.955 17:28:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:27.527 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:27.527 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:27.527 Found net devices under 0000:86:00.0: cvl_0_0
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:27.527 Found net devices under 0000:86:00.1: cvl_0_1
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:27.527 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2
> 1 )) 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:10:27.528 00:10:27.528 --- 10.0.0.2 ping statistics --- 00:10:27.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.528 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:10:27.528 00:10:27.528 --- 10.0.0.1 ping statistics --- 00:10:27.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.528 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3369655 00:10:27.528 17:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3369655 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3369655 ']' 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.528 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 [2024-11-19 17:28:28.976415] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:10:27.528 [2024-11-19 17:28:28.976471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.528 [2024-11-19 17:28:29.054621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.528 [2024-11-19 17:28:29.096187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:27.528 [2024-11-19 17:28:29.096228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.528 [2024-11-19 17:28:29.096236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.528 [2024-11-19 17:28:29.096242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.528 [2024-11-19 17:28:29.096247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.528 [2024-11-19 17:28:29.097849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.528 [2024-11-19 17:28:29.097973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.528 [2024-11-19 17:28:29.098042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.528 [2024-11-19 17:28:29.098043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 [2024-11-19 17:28:29.243527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 Null1 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.528 
17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.528 [2024-11-19 17:28:29.289012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.528 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 Null2 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 
17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 Null3 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 Null4 00:10:27.529 
17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:27.529 00:10:27.529 Discovery Log Number of Records 6, Generation counter 6 00:10:27.529 =====Discovery Log Entry 0====== 00:10:27.529 trtype: tcp 00:10:27.529 adrfam: ipv4 00:10:27.529 subtype: current discovery subsystem 00:10:27.529 treq: not required 00:10:27.529 portid: 0 00:10:27.529 trsvcid: 4420 00:10:27.529 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:27.529 traddr: 10.0.0.2 00:10:27.529 eflags: explicit discovery connections, duplicate discovery information 00:10:27.529 sectype: none 00:10:27.529 =====Discovery Log Entry 1====== 00:10:27.529 trtype: tcp 00:10:27.529 adrfam: ipv4 00:10:27.529 subtype: nvme subsystem 00:10:27.529 treq: not required 00:10:27.529 portid: 0 00:10:27.529 trsvcid: 4420 00:10:27.529 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:27.529 traddr: 10.0.0.2 00:10:27.529 eflags: none 00:10:27.529 sectype: none 00:10:27.529 =====Discovery Log Entry 2====== 00:10:27.529 
trtype: tcp 00:10:27.529 adrfam: ipv4 00:10:27.529 subtype: nvme subsystem 00:10:27.529 treq: not required 00:10:27.529 portid: 0 00:10:27.529 trsvcid: 4420 00:10:27.529 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:27.529 traddr: 10.0.0.2 00:10:27.529 eflags: none 00:10:27.529 sectype: none 00:10:27.529 =====Discovery Log Entry 3====== 00:10:27.529 trtype: tcp 00:10:27.529 adrfam: ipv4 00:10:27.529 subtype: nvme subsystem 00:10:27.529 treq: not required 00:10:27.529 portid: 0 00:10:27.529 trsvcid: 4420 00:10:27.529 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:27.529 traddr: 10.0.0.2 00:10:27.529 eflags: none 00:10:27.529 sectype: none 00:10:27.529 =====Discovery Log Entry 4====== 00:10:27.529 trtype: tcp 00:10:27.529 adrfam: ipv4 00:10:27.529 subtype: nvme subsystem 00:10:27.529 treq: not required 00:10:27.529 portid: 0 00:10:27.529 trsvcid: 4420 00:10:27.529 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:27.529 traddr: 10.0.0.2 00:10:27.529 eflags: none 00:10:27.529 sectype: none 00:10:27.529 =====Discovery Log Entry 5====== 00:10:27.529 trtype: tcp 00:10:27.529 adrfam: ipv4 00:10:27.529 subtype: discovery subsystem referral 00:10:27.529 treq: not required 00:10:27.529 portid: 0 00:10:27.529 trsvcid: 4430 00:10:27.529 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:27.529 traddr: 10.0.0.2 00:10:27.529 eflags: none 00:10:27.529 sectype: none 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:27.529 Perform nvmf subsystem discovery via RPC 00:10:27.529 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 [ 00:10:27.530 { 00:10:27.530 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:27.530 "subtype": "Discovery", 00:10:27.530 "listen_addresses": [ 00:10:27.530 { 00:10:27.530 "trtype": "TCP", 00:10:27.530 "adrfam": "IPv4", 00:10:27.530 "traddr": "10.0.0.2", 00:10:27.530 "trsvcid": "4420" 00:10:27.530 } 00:10:27.530 ], 00:10:27.530 "allow_any_host": true, 00:10:27.530 "hosts": [] 00:10:27.530 }, 00:10:27.530 { 00:10:27.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.530 "subtype": "NVMe", 00:10:27.530 "listen_addresses": [ 00:10:27.530 { 00:10:27.530 "trtype": "TCP", 00:10:27.530 "adrfam": "IPv4", 00:10:27.530 "traddr": "10.0.0.2", 00:10:27.530 "trsvcid": "4420" 00:10:27.530 } 00:10:27.530 ], 00:10:27.530 "allow_any_host": true, 00:10:27.530 "hosts": [], 00:10:27.530 "serial_number": "SPDK00000000000001", 00:10:27.530 "model_number": "SPDK bdev Controller", 00:10:27.530 "max_namespaces": 32, 00:10:27.530 "min_cntlid": 1, 00:10:27.530 "max_cntlid": 65519, 00:10:27.530 "namespaces": [ 00:10:27.530 { 00:10:27.530 "nsid": 1, 00:10:27.530 "bdev_name": "Null1", 00:10:27.530 "name": "Null1", 00:10:27.530 "nguid": "BD385D8B70364566872C241FCD38E4C4", 00:10:27.530 "uuid": "bd385d8b-7036-4566-872c-241fcd38e4c4" 00:10:27.530 } 00:10:27.530 ] 00:10:27.530 }, 00:10:27.530 { 00:10:27.530 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:27.530 "subtype": "NVMe", 00:10:27.530 "listen_addresses": [ 00:10:27.530 { 00:10:27.530 "trtype": "TCP", 00:10:27.530 "adrfam": "IPv4", 00:10:27.530 "traddr": "10.0.0.2", 00:10:27.530 "trsvcid": "4420" 00:10:27.530 } 00:10:27.530 ], 00:10:27.530 "allow_any_host": true, 00:10:27.530 "hosts": [], 00:10:27.530 "serial_number": "SPDK00000000000002", 00:10:27.530 "model_number": "SPDK bdev Controller", 00:10:27.530 "max_namespaces": 32, 00:10:27.530 "min_cntlid": 1, 00:10:27.530 "max_cntlid": 65519, 00:10:27.530 "namespaces": [ 00:10:27.530 { 00:10:27.530 "nsid": 1, 00:10:27.530 "bdev_name": "Null2", 00:10:27.530 "name": "Null2", 00:10:27.530 "nguid": "FD8500323CA44FD19416F0785BBC08E2", 
00:10:27.530 "uuid": "fd850032-3ca4-4fd1-9416-f0785bbc08e2" 00:10:27.530 } 00:10:27.530 ] 00:10:27.530 }, 00:10:27.530 { 00:10:27.530 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:27.530 "subtype": "NVMe", 00:10:27.530 "listen_addresses": [ 00:10:27.530 { 00:10:27.530 "trtype": "TCP", 00:10:27.530 "adrfam": "IPv4", 00:10:27.530 "traddr": "10.0.0.2", 00:10:27.530 "trsvcid": "4420" 00:10:27.530 } 00:10:27.530 ], 00:10:27.530 "allow_any_host": true, 00:10:27.530 "hosts": [], 00:10:27.530 "serial_number": "SPDK00000000000003", 00:10:27.530 "model_number": "SPDK bdev Controller", 00:10:27.530 "max_namespaces": 32, 00:10:27.530 "min_cntlid": 1, 00:10:27.530 "max_cntlid": 65519, 00:10:27.530 "namespaces": [ 00:10:27.530 { 00:10:27.530 "nsid": 1, 00:10:27.530 "bdev_name": "Null3", 00:10:27.530 "name": "Null3", 00:10:27.530 "nguid": "565E25EA6C3D414A975B789EE4813F6E", 00:10:27.530 "uuid": "565e25ea-6c3d-414a-975b-789ee4813f6e" 00:10:27.530 } 00:10:27.530 ] 00:10:27.530 }, 00:10:27.530 { 00:10:27.530 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:27.530 "subtype": "NVMe", 00:10:27.530 "listen_addresses": [ 00:10:27.530 { 00:10:27.530 "trtype": "TCP", 00:10:27.530 "adrfam": "IPv4", 00:10:27.530 "traddr": "10.0.0.2", 00:10:27.530 "trsvcid": "4420" 00:10:27.530 } 00:10:27.530 ], 00:10:27.530 "allow_any_host": true, 00:10:27.530 "hosts": [], 00:10:27.530 "serial_number": "SPDK00000000000004", 00:10:27.530 "model_number": "SPDK bdev Controller", 00:10:27.530 "max_namespaces": 32, 00:10:27.530 "min_cntlid": 1, 00:10:27.530 "max_cntlid": 65519, 00:10:27.530 "namespaces": [ 00:10:27.530 { 00:10:27.530 "nsid": 1, 00:10:27.530 "bdev_name": "Null4", 00:10:27.530 "name": "Null4", 00:10:27.530 "nguid": "E25614ECBD114A808F6C07A7B11030DC", 00:10:27.530 "uuid": "e25614ec-bd11-4a80-8f6c-07a7b11030dc" 00:10:27.530 } 00:10:27.530 ] 00:10:27.530 } 00:10:27.530 ] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 
17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.530 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.790 rmmod nvme_tcp 00:10:27.790 rmmod nvme_fabrics 00:10:27.790 rmmod nvme_keyring 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3369655 ']' 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3369655 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3369655 ']' 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3369655 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369655 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369655' 00:10:27.790 killing process with pid 3369655 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3369655 00:10:27.790 17:28:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3369655 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.049 17:28:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.956 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.956 00:10:29.956 real 0m9.414s 00:10:29.956 user 0m5.748s 00:10:29.956 sys 0m4.860s 00:10:29.956 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.956 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.956 ************************************ 00:10:29.956 END TEST nvmf_target_discovery 00:10:29.956 ************************************ 00:10:29.957 17:28:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:29.957 17:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.957 17:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.957 17:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.216 ************************************ 00:10:30.216 START TEST nvmf_referrals 00:10:30.216 ************************************ 00:10:30.216 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:30.216 * Looking for test storage... 
00:10:30.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.216 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.216 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.216 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.216 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.216 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:30.217 17:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.217 
--rc genhtml_branch_coverage=1 00:10:30.217 --rc genhtml_function_coverage=1 00:10:30.217 --rc genhtml_legend=1 00:10:30.217 --rc geninfo_all_blocks=1 00:10:30.217 --rc geninfo_unexecuted_blocks=1 00:10:30.217 00:10:30.217 ' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.217 --rc genhtml_branch_coverage=1 00:10:30.217 --rc genhtml_function_coverage=1 00:10:30.217 --rc genhtml_legend=1 00:10:30.217 --rc geninfo_all_blocks=1 00:10:30.217 --rc geninfo_unexecuted_blocks=1 00:10:30.217 00:10:30.217 ' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.217 --rc genhtml_branch_coverage=1 00:10:30.217 --rc genhtml_function_coverage=1 00:10:30.217 --rc genhtml_legend=1 00:10:30.217 --rc geninfo_all_blocks=1 00:10:30.217 --rc geninfo_unexecuted_blocks=1 00:10:30.217 00:10:30.217 ' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.217 --rc genhtml_branch_coverage=1 00:10:30.217 --rc genhtml_function_coverage=1 00:10:30.217 --rc genhtml_legend=1 00:10:30.217 --rc geninfo_all_blocks=1 00:10:30.217 --rc geninfo_unexecuted_blocks=1 00:10:30.217 00:10:30.217 ' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.217 
17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.217 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.217 17:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.218 17:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.218 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.477 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.477 17:28:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.190 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:37.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:37.191 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:37.191 Found net devices under 0000:86:00.0: cvl_0_0 00:10:37.191 17:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:37.191 Found net devices under 0000:86:00.1: cvl_0_1 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:10:37.191 00:10:37.191 --- 10.0.0.2 ping statistics --- 00:10:37.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.191 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:37.191 00:10:37.191 --- 10.0.0.1 ping statistics --- 00:10:37.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.191 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3373437 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3373437 00:10:37.191 
17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3373437 ']' 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.191 [2024-11-19 17:28:38.413504] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:10:37.191 [2024-11-19 17:28:38.413552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.191 [2024-11-19 17:28:38.495184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.191 [2024-11-19 17:28:38.537550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.191 [2024-11-19 17:28:38.537586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:37.191 [2024-11-19 17:28:38.537593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.191 [2024-11-19 17:28:38.537599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.191 [2024-11-19 17:28:38.537604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.191 [2024-11-19 17:28:38.539171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.191 [2024-11-19 17:28:38.539281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.191 [2024-11-19 17:28:38.539389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.191 [2024-11-19 17:28:38.539390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.191 [2024-11-19 17:28:38.675737] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.191 [2024-11-19 17:28:38.689153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.191 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:37.192 17:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.192 17:28:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.192 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.465 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:37.465 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:37.465 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:37.465 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:37.465 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:37.465 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.465 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:37.724 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:37.725 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:37.725 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:37.725 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.725 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.725 17:28:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.983 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:37.983 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:37.983 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:37.983 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:37.983 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:37.984 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.984 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:38.243 17:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.243 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:38.502 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.762 rmmod nvme_tcp 00:10:38.762 rmmod nvme_fabrics 00:10:38.762 rmmod nvme_keyring 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3373437 ']' 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3373437 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3373437 ']' 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3373437 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3373437 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3373437' 00:10:38.762 killing process with pid 3373437 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3373437 00:10:38.762 17:28:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3373437 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.022 17:28:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.927 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.927 00:10:40.927 real 0m10.880s 00:10:40.927 user 0m12.338s 00:10:40.927 sys 0m5.289s 00:10:40.927 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.927 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.927 
************************************ 00:10:40.927 END TEST nvmf_referrals 00:10:40.927 ************************************ 00:10:40.927 17:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:40.927 17:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.927 17:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.927 17:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.187 ************************************ 00:10:41.187 START TEST nvmf_connect_disconnect 00:10:41.187 ************************************ 00:10:41.187 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:41.187 * Looking for test storage... 
00:10:41.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.187 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.188 --rc genhtml_branch_coverage=1 00:10:41.188 --rc genhtml_function_coverage=1 00:10:41.188 --rc genhtml_legend=1 00:10:41.188 --rc geninfo_all_blocks=1 00:10:41.188 --rc geninfo_unexecuted_blocks=1 00:10:41.188 00:10:41.188 ' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.188 --rc genhtml_branch_coverage=1 00:10:41.188 --rc genhtml_function_coverage=1 00:10:41.188 --rc genhtml_legend=1 00:10:41.188 --rc geninfo_all_blocks=1 00:10:41.188 --rc geninfo_unexecuted_blocks=1 00:10:41.188 00:10:41.188 ' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.188 --rc genhtml_branch_coverage=1 00:10:41.188 --rc genhtml_function_coverage=1 00:10:41.188 --rc genhtml_legend=1 00:10:41.188 --rc geninfo_all_blocks=1 00:10:41.188 --rc geninfo_unexecuted_blocks=1 00:10:41.188 00:10:41.188 ' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.188 --rc genhtml_branch_coverage=1 00:10:41.188 --rc genhtml_function_coverage=1 00:10:41.188 --rc genhtml_legend=1 00:10:41.188 --rc geninfo_all_blocks=1 00:10:41.188 --rc geninfo_unexecuted_blocks=1 00:10:41.188 00:10:41.188 ' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.188 17:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.761 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.761 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:47.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.761 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:47.762 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.762 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:47.762 Found net devices under 0000:86:00.0: cvl_0_0 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.762 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:47.762 Found net devices under 0000:86:00.1: cvl_0_1 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.762 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:10:47.762 00:10:47.762 --- 10.0.0.2 ping statistics --- 00:10:47.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.762 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:10:47.762 00:10:47.762 --- 10.0.0.1 ping statistics --- 00:10:47.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.762 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3377363 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3377363 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3377363 ']' 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.762 [2024-11-19 17:28:49.454770] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:10:47.762 [2024-11-19 17:28:49.454819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.762 [2024-11-19 17:28:49.534824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.762 [2024-11-19 17:28:49.579033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:47.762 [2024-11-19 17:28:49.579071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.762 [2024-11-19 17:28:49.579080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.762 [2024-11-19 17:28:49.579090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.762 [2024-11-19 17:28:49.579096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.762 [2024-11-19 17:28:49.580644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.762 [2024-11-19 17:28:49.580752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.762 [2024-11-19 17:28:49.580782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.762 [2024-11-19 17:28:49.580783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.762 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:47.763 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.763 [2024-11-19 17:28:49.726619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.763 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.763 [2024-11-19 17:28:49.803288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:47.763 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:51.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:04.221 17:29:06 
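The target setup that the trace above walks through (transport creation, malloc bdev, subsystem, namespace, listener) boils down to the following SPDK RPC sequence. This is a sketch reconstructed from the `rpc_cmd` lines in the log, not the test script itself; it assumes a running `nvmf_tgt` and the stock `scripts/rpc.py` helper, and all NQN/address/size values are taken verbatim from the log:

```shell
# Sketch of the connect_disconnect setup phase, assuming nvmf_tgt is already
# running and listening on the default /var/tmp/spdk.sock RPC socket.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0          # TCP transport, 8192B IO unit, no in-capsule data
rpc.py bdev_malloc_create 64 512                             # 64 MiB malloc bdev, 512B blocks -> "Malloc0"
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                               # allow-any-host, fixed serial
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                               # listener inside the cvl_0_0_ns_spdk netns
```

After this setup the test loops `num_iterations=5` times, connecting and disconnecting a host from `cnode1`, which is what produces the five "disconnected 1 controller(s)" lines in the log.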
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.221 rmmod nvme_tcp 00:11:04.221 rmmod nvme_fabrics 00:11:04.221 rmmod nvme_keyring 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3377363 ']' 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3377363 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3377363 ']' 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3377363 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3377363 
00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3377363' 00:11:04.221 killing process with pid 3377363 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3377363 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3377363 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.221 17:29:06 
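The teardown traced above (`nvmftestfini` via the EXIT trap) can be summarized as the following sketch. It is reconstructed from the `modprobe`/`killprocess`/`iptables` lines in the log; the pid and namespace name come from this run and would differ elsewhere:

```shell
# Sketch of the nvmf_connect_disconnect cleanup phase, values taken from this log.
modprobe -v -r nvme-tcp                          # also pulls out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill 3377363                                     # nvmf_tgt pid recorded at startup ($nvmfpid)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test-added firewall rules
ip netns exec cvl_0_0_ns_spdk ip -4 addr flush cvl_0_0 # then the netns itself is removed
ip -4 addr flush cvl_0_1
```

The trap-driven structure means this cleanup runs even on test failure, which is why the next test (`nvmf_multitarget`) can re-run `nvmftestinit` and rebuild the same `cvl_0_0_ns_spdk` namespace from scratch.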
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.221 17:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.760 00:11:06.760 real 0m25.294s 00:11:06.760 user 1m8.469s 00:11:06.760 sys 0m5.845s 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:06.760 ************************************ 00:11:06.760 END TEST nvmf_connect_disconnect 00:11:06.760 ************************************ 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.760 ************************************ 00:11:06.760 START TEST nvmf_multitarget 00:11:06.760 ************************************ 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:06.760 * Looking for test storage... 
00:11:06.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.760 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.760 --rc genhtml_branch_coverage=1 00:11:06.760 --rc genhtml_function_coverage=1 00:11:06.760 --rc genhtml_legend=1 00:11:06.760 --rc geninfo_all_blocks=1 00:11:06.760 --rc geninfo_unexecuted_blocks=1 00:11:06.760 00:11:06.760 ' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.760 --rc genhtml_branch_coverage=1 00:11:06.760 --rc genhtml_function_coverage=1 00:11:06.760 --rc genhtml_legend=1 00:11:06.760 --rc geninfo_all_blocks=1 00:11:06.760 --rc geninfo_unexecuted_blocks=1 00:11:06.760 00:11:06.760 ' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.760 --rc genhtml_branch_coverage=1 00:11:06.760 --rc genhtml_function_coverage=1 00:11:06.760 --rc genhtml_legend=1 00:11:06.760 --rc geninfo_all_blocks=1 00:11:06.760 --rc geninfo_unexecuted_blocks=1 00:11:06.760 00:11:06.760 ' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.760 --rc genhtml_branch_coverage=1 00:11:06.760 --rc genhtml_function_coverage=1 00:11:06.760 --rc genhtml_legend=1 00:11:06.760 --rc geninfo_all_blocks=1 00:11:06.760 --rc geninfo_unexecuted_blocks=1 00:11:06.760 00:11:06.760 ' 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.760 17:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.760 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.761 17:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.761 17:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:13.336 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.336 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.336 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.336 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.336 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:13.337 17:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:13.337 17:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:13.337 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:13.337 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.337 17:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:13.337 Found net devices under 0000:86:00.0: cvl_0_0 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.337 
17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:13.337 Found net devices under 0000:86:00.1: cvl_0_1 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.337 17:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:13.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:11:13.337 00:11:13.337 --- 10.0.0.2 ping statistics --- 00:11:13.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.337 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:11:13.337 00:11:13.337 --- 10.0.0.1 ping statistics --- 00:11:13.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.337 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:13.337 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3383697 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
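
The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250-291) moves the target-side port into its own network namespace, addresses both ends, opens TCP port 4420, and pings in both directions. A dry-run sketch of those steps, echoing each command instead of executing it (the real commands need root and the `cvl_0_*` ice devices from this rig):

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init steps from the trace; `run` only
# prints each command, since the real ones require root and the NICs.
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk                                # target-side namespace, as in the log
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"               # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                            # host -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> host
```

After this, every `nvmf_tgt` invocation is wrapped in `ip netns exec $NS`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible on the `nvmf_tgt` command line below.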
waitforlisten 3383697 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3383697 ']' 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.338 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:13.338 [2024-11-19 17:29:14.786716] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:11:13.338 [2024-11-19 17:29:14.786764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.338 [2024-11-19 17:29:14.865761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.338 [2024-11-19 17:29:14.909173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.338 [2024-11-19 17:29:14.909211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:13.338 [2024-11-19 17:29:14.909218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.338 [2024-11-19 17:29:14.909224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.338 [2024-11-19 17:29:14.909229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.338 [2024-11-19 17:29:14.910869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.338 [2024-11-19 17:29:14.910992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.338 [2024-11-19 17:29:14.911039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.338 [2024-11-19 17:29:14.911039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:13.338 17:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:13.338 "nvmf_tgt_1" 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:13.338 "nvmf_tgt_2" 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:13.338 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:13.598 true 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:13.598 true 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
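
multitarget.sh asserts the target count at each stage by piping `nvmf_get_targets` through `jq length`: 1 target at startup, 3 after creating `nvmf_tgt_1` and `nvmf_tgt_2`, and back to 1 after the two deletes. A self-contained sketch of that count check against a hypothetical RPC response (quotes are counted instead of calling `jq`, so the sketch has no dependencies):

```shell
#!/bin/sh
# Sketch of the count check from multitarget.sh@21/@28/@35. The JSON below
# is a hypothetical stand-in for the nvmf_get_targets response after the two
# creates: the default target plus nvmf_tgt_1 and nvmf_tgt_2.
targets='["default", "nvmf_tgt_1", "nvmf_tgt_2"]'
# Each name contributes two quote characters, so quotes/2 = element count.
count=$(( $(printf '%s' "$targets" | tr -cd '"' | wc -c) / 2 ))
if [ "$count" -ne 3 ]; then
    echo "expected 3 targets, got $count" >&2
    exit 1
fi
echo "target count ok: $count"
```

The real test uses `jq length` on the live RPC output; the `'[' 3 '!=' 3 ']'` comparisons in the trace are this check passing.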
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.598 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.857 rmmod nvme_tcp 00:11:13.857 rmmod nvme_fabrics 00:11:13.857 rmmod nvme_keyring 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3383697 ']' 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3383697 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3383697 ']' 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3383697 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3383697 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3383697' 00:11:13.857 killing process with pid 3383697 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3383697 00:11:13.857 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3383697 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
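
The `iptr` cleanup near the end (nvmf/common.sh@791) removes only the firewall rules SPDK added, by round-tripping the ruleset through `iptables-save | grep -v SPDK_NVMF | iptables-restore`. Every rule SPDK inserts carries an `SPDK_NVMF` comment (see the `-m comment --comment 'SPDK_NVMF:...'` insert earlier in the trace), so the filter leaves unrelated rules intact. A sketch of just the filtering stage, on a hypothetical two-rule stand-in for `iptables-save` output:

```shell
#!/bin/sh
# Sketch of the grep stage of iptr: drop SPDK-tagged rules, keep the rest.
# The ruleset below is hypothetical stand-in iptables-save output; only the
# rule carrying the SPDK_NVMF comment tag is filtered out.
saved='-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."'
kept=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

In the real helper the surviving lines are fed straight back into `iptables-restore`, restoring the pre-test ruleset without touching rules owned by the host.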
xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.117 17:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.025 00:11:16.025 real 0m9.626s 00:11:16.025 user 0m7.211s 00:11:16.025 sys 0m4.930s 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:16.025 ************************************ 00:11:16.025 END TEST nvmf_multitarget 00:11:16.025 ************************************ 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.025 ************************************ 00:11:16.025 START TEST nvmf_rpc 00:11:16.025 ************************************ 00:11:16.025 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:16.285 * Looking for test storage... 
00:11:16.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.285 17:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.285 --rc genhtml_branch_coverage=1 00:11:16.285 --rc genhtml_function_coverage=1 00:11:16.285 --rc genhtml_legend=1 00:11:16.285 --rc geninfo_all_blocks=1 00:11:16.285 --rc geninfo_unexecuted_blocks=1 
00:11:16.285 00:11:16.285 ' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.285 --rc genhtml_branch_coverage=1 00:11:16.285 --rc genhtml_function_coverage=1 00:11:16.285 --rc genhtml_legend=1 00:11:16.285 --rc geninfo_all_blocks=1 00:11:16.285 --rc geninfo_unexecuted_blocks=1 00:11:16.285 00:11:16.285 ' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.285 --rc genhtml_branch_coverage=1 00:11:16.285 --rc genhtml_function_coverage=1 00:11:16.285 --rc genhtml_legend=1 00:11:16.285 --rc geninfo_all_blocks=1 00:11:16.285 --rc geninfo_unexecuted_blocks=1 00:11:16.285 00:11:16.285 ' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.285 --rc genhtml_branch_coverage=1 00:11:16.285 --rc genhtml_function_coverage=1 00:11:16.285 --rc genhtml_legend=1 00:11:16.285 --rc geninfo_all_blocks=1 00:11:16.285 --rc geninfo_unexecuted_blocks=1 00:11:16.285 00:11:16.285 ' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.285 17:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.285 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:16.286 17:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.286 17:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.860 
17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:22.860 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:22.860 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:22.860 Found net devices under 0000:86:00.0: cvl_0_0 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.860 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:22.861 Found net devices under 0000:86:00.1: cvl_0_1 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.861 17:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.861 
17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:22.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:11:22.861 00:11:22.861 --- 10.0.0.2 ping statistics --- 00:11:22.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.861 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:11:22.861 00:11:22.861 --- 10.0.0.1 ping statistics --- 00:11:22.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.861 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3387477 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.861 
17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3387477 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3387477 ']' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.861 [2024-11-19 17:29:24.455009] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:11:22.861 [2024-11-19 17:29:24.455062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.861 [2024-11-19 17:29:24.535234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.861 [2024-11-19 17:29:24.578128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.861 [2024-11-19 17:29:24.578164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.861 [2024-11-19 17:29:24.578172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.861 [2024-11-19 17:29:24.578178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:22.861 [2024-11-19 17:29:24.578183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.861 [2024-11-19 17:29:24.579785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.861 [2024-11-19 17:29:24.579893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.861 [2024-11-19 17:29:24.580000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.861 [2024-11-19 17:29:24.580001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:22.861 "tick_rate": 2300000000, 00:11:22.861 "poll_groups": [ 00:11:22.861 { 00:11:22.861 "name": "nvmf_tgt_poll_group_000", 00:11:22.861 "admin_qpairs": 0, 00:11:22.861 "io_qpairs": 0, 00:11:22.861 
"current_admin_qpairs": 0, 00:11:22.861 "current_io_qpairs": 0, 00:11:22.861 "pending_bdev_io": 0, 00:11:22.861 "completed_nvme_io": 0, 00:11:22.861 "transports": [] 00:11:22.861 }, 00:11:22.861 { 00:11:22.861 "name": "nvmf_tgt_poll_group_001", 00:11:22.861 "admin_qpairs": 0, 00:11:22.861 "io_qpairs": 0, 00:11:22.861 "current_admin_qpairs": 0, 00:11:22.861 "current_io_qpairs": 0, 00:11:22.861 "pending_bdev_io": 0, 00:11:22.861 "completed_nvme_io": 0, 00:11:22.861 "transports": [] 00:11:22.861 }, 00:11:22.861 { 00:11:22.861 "name": "nvmf_tgt_poll_group_002", 00:11:22.861 "admin_qpairs": 0, 00:11:22.861 "io_qpairs": 0, 00:11:22.861 "current_admin_qpairs": 0, 00:11:22.861 "current_io_qpairs": 0, 00:11:22.861 "pending_bdev_io": 0, 00:11:22.861 "completed_nvme_io": 0, 00:11:22.861 "transports": [] 00:11:22.861 }, 00:11:22.861 { 00:11:22.861 "name": "nvmf_tgt_poll_group_003", 00:11:22.861 "admin_qpairs": 0, 00:11:22.861 "io_qpairs": 0, 00:11:22.861 "current_admin_qpairs": 0, 00:11:22.861 "current_io_qpairs": 0, 00:11:22.861 "pending_bdev_io": 0, 00:11:22.861 "completed_nvme_io": 0, 00:11:22.861 "transports": [] 00:11:22.861 } 00:11:22.861 ] 00:11:22.861 }' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:22.861 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 [2024-11-19 17:29:24.829863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:22.862 "tick_rate": 2300000000, 00:11:22.862 "poll_groups": [ 00:11:22.862 { 00:11:22.862 "name": "nvmf_tgt_poll_group_000", 00:11:22.862 "admin_qpairs": 0, 00:11:22.862 "io_qpairs": 0, 00:11:22.862 "current_admin_qpairs": 0, 00:11:22.862 "current_io_qpairs": 0, 00:11:22.862 "pending_bdev_io": 0, 00:11:22.862 "completed_nvme_io": 0, 00:11:22.862 "transports": [ 00:11:22.862 { 00:11:22.862 "trtype": "TCP" 00:11:22.862 } 00:11:22.862 ] 00:11:22.862 }, 00:11:22.862 { 00:11:22.862 "name": "nvmf_tgt_poll_group_001", 00:11:22.862 "admin_qpairs": 0, 00:11:22.862 "io_qpairs": 0, 00:11:22.862 "current_admin_qpairs": 0, 00:11:22.862 "current_io_qpairs": 0, 00:11:22.862 "pending_bdev_io": 0, 00:11:22.862 "completed_nvme_io": 0, 00:11:22.862 "transports": [ 00:11:22.862 { 00:11:22.862 "trtype": "TCP" 00:11:22.862 } 00:11:22.862 ] 00:11:22.862 }, 00:11:22.862 { 00:11:22.862 "name": "nvmf_tgt_poll_group_002", 00:11:22.862 "admin_qpairs": 0, 00:11:22.862 "io_qpairs": 0, 00:11:22.862 
"current_admin_qpairs": 0, 00:11:22.862 "current_io_qpairs": 0, 00:11:22.862 "pending_bdev_io": 0, 00:11:22.862 "completed_nvme_io": 0, 00:11:22.862 "transports": [ 00:11:22.862 { 00:11:22.862 "trtype": "TCP" 00:11:22.862 } 00:11:22.862 ] 00:11:22.862 }, 00:11:22.862 { 00:11:22.862 "name": "nvmf_tgt_poll_group_003", 00:11:22.862 "admin_qpairs": 0, 00:11:22.862 "io_qpairs": 0, 00:11:22.862 "current_admin_qpairs": 0, 00:11:22.862 "current_io_qpairs": 0, 00:11:22.862 "pending_bdev_io": 0, 00:11:22.862 "completed_nvme_io": 0, 00:11:22.862 "transports": [ 00:11:22.862 { 00:11:22.862 "trtype": "TCP" 00:11:22.862 } 00:11:22.862 ] 00:11:22.862 } 00:11:22.862 ] 00:11:22.862 }' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 Malloc1 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 [2024-11-19 17:29:25.007015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.862 
17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:22.862 [2024-11-19 17:29:25.035545] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:22.862 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:22.862 could not add new controller: failed to write to nvme-fabrics device 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.862 17:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.862 17:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.241 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.241 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:24.241 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.241 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:24.241 17:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:26.147 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:26.147 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:26.147 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.147 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:26.147 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.147 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.148 17:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:26.148 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.148 [2024-11-19 17:29:28.349770] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:26.407 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:26.407 could not add new controller: failed to write to nvme-fabrics device 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.407 17:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.407 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.345 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.345 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.345 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.345 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.345 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.879 [2024-11-19 17:29:31.765381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.879 17:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.816 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.816 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:30.816 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.816 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:30.816 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.721 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.721 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.721 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.721 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.721 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.721 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:32.721 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:32.981 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.982 17:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.982 [2024-11-19 17:29:35.070549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.982 17:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.360 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.360 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.360 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.360 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.360 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 [2024-11-19 17:29:38.409938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 17:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.662 17:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.662 17:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.662 17:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:37.662 17:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.662 17:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.569 [2024-11-19 17:29:41.735104] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.569 17:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.054 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.054 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.054 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.054 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:41.054 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:42.959 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.959 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.959 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.959 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.959 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.959 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:42.959 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.959 [2024-11-19 17:29:45.131113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.959 17:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.959 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.337 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.337 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.337 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.337 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.337 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.244 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 [2024-11-19 17:29:48.508538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 [2024-11-19 17:29:48.556575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.504 
17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 [2024-11-19 17:29:48.604713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.504 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.504 
17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 [2024-11-19 17:29:48.652890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 [2024-11-19 
17:29:48.701050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.505 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.764 
17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.764 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:46.764 "tick_rate": 2300000000, 00:11:46.764 "poll_groups": [ 00:11:46.764 { 00:11:46.764 "name": "nvmf_tgt_poll_group_000", 00:11:46.764 "admin_qpairs": 2, 00:11:46.764 "io_qpairs": 168, 00:11:46.764 "current_admin_qpairs": 0, 00:11:46.764 "current_io_qpairs": 0, 00:11:46.764 "pending_bdev_io": 0, 00:11:46.764 "completed_nvme_io": 267, 00:11:46.764 "transports": [ 00:11:46.764 { 00:11:46.764 "trtype": "TCP" 00:11:46.764 } 00:11:46.764 ] 00:11:46.764 }, 00:11:46.764 { 00:11:46.764 "name": "nvmf_tgt_poll_group_001", 00:11:46.764 "admin_qpairs": 2, 00:11:46.764 "io_qpairs": 168, 00:11:46.764 "current_admin_qpairs": 0, 00:11:46.764 "current_io_qpairs": 0, 00:11:46.764 "pending_bdev_io": 0, 00:11:46.764 "completed_nvme_io": 217, 00:11:46.764 "transports": [ 00:11:46.764 { 00:11:46.764 "trtype": "TCP" 00:11:46.764 } 00:11:46.764 ] 00:11:46.764 }, 00:11:46.764 { 00:11:46.764 "name": "nvmf_tgt_poll_group_002", 00:11:46.764 "admin_qpairs": 1, 00:11:46.764 "io_qpairs": 168, 00:11:46.764 "current_admin_qpairs": 0, 00:11:46.764 "current_io_qpairs": 0, 00:11:46.764 "pending_bdev_io": 0, 00:11:46.764 "completed_nvme_io": 269, 00:11:46.764 "transports": [ 00:11:46.764 { 00:11:46.764 "trtype": "TCP" 00:11:46.764 } 00:11:46.764 ] 00:11:46.764 }, 00:11:46.764 { 00:11:46.764 "name": "nvmf_tgt_poll_group_003", 00:11:46.764 "admin_qpairs": 2, 00:11:46.764 "io_qpairs": 168, 
00:11:46.764 "current_admin_qpairs": 0, 00:11:46.764 "current_io_qpairs": 0, 00:11:46.764 "pending_bdev_io": 0, 00:11:46.764 "completed_nvme_io": 269, 00:11:46.764 "transports": [ 00:11:46.764 { 00:11:46.764 "trtype": "TCP" 00:11:46.764 } 00:11:46.764 ] 00:11:46.765 } 00:11:46.765 ] 00:11:46.765 }' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.765 rmmod nvme_tcp 00:11:46.765 rmmod nvme_fabrics 00:11:46.765 rmmod nvme_keyring 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3387477 ']' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3387477 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3387477 ']' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3387477 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3387477 00:11:46.765 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.024 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.024 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3387477' 00:11:47.024 killing process with pid 3387477 00:11:47.024 17:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3387477 00:11:47.024 17:29:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3387477 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.024 17:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.562 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:49.562 00:11:49.562 real 0m33.005s 00:11:49.562 user 1m39.685s 00:11:49.562 sys 0m6.526s 00:11:49.562 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.562 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.562 ************************************ 00:11:49.562 END TEST 
nvmf_rpc 00:11:49.562 ************************************ 00:11:49.562 17:29:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.563 ************************************ 00:11:49.563 START TEST nvmf_invalid 00:11:49.563 ************************************ 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:49.563 * Looking for test storage... 00:11:49.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:49.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.563 --rc genhtml_branch_coverage=1 00:11:49.563 --rc genhtml_function_coverage=1 00:11:49.563 --rc genhtml_legend=1 00:11:49.563 --rc geninfo_all_blocks=1 00:11:49.563 --rc geninfo_unexecuted_blocks=1 00:11:49.563 00:11:49.563 ' 
00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:49.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.563 --rc genhtml_branch_coverage=1 00:11:49.563 --rc genhtml_function_coverage=1 00:11:49.563 --rc genhtml_legend=1 00:11:49.563 --rc geninfo_all_blocks=1 00:11:49.563 --rc geninfo_unexecuted_blocks=1 00:11:49.563 00:11:49.563 ' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:49.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.563 --rc genhtml_branch_coverage=1 00:11:49.563 --rc genhtml_function_coverage=1 00:11:49.563 --rc genhtml_legend=1 00:11:49.563 --rc geninfo_all_blocks=1 00:11:49.563 --rc geninfo_unexecuted_blocks=1 00:11:49.563 00:11:49.563 ' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:49.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.563 --rc genhtml_branch_coverage=1 00:11:49.563 --rc genhtml_function_coverage=1 00:11:49.563 --rc genhtml_legend=1 00:11:49.563 --rc geninfo_all_blocks=1 00:11:49.563 --rc geninfo_unexecuted_blocks=1 00:11:49.563 00:11:49.563 ' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.563 17:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.563 
17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.563 17:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.563 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:49.564 17:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:49.564 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.139 17:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.139 17:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:56.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:56.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:56.139 Found net devices under 0000:86:00.0: cvl_0_0 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:56.139 Found net devices under 0000:86:00.1: cvl_0_1 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.139 17:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.139 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.140 17:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:56.140 00:11:56.140 --- 10.0.0.2 ping statistics --- 00:11:56.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.140 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:11:56.140 00:11:56.140 --- 10.0.0.1 ping statistics --- 00:11:56.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.140 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.140 17:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3395197 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3395197 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3395197 ']' 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:56.140 [2024-11-19 17:29:57.549661] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:11:56.140 [2024-11-19 17:29:57.549714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.140 [2024-11-19 17:29:57.632807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.140 [2024-11-19 17:29:57.676163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.140 [2024-11-19 17:29:57.676200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.140 [2024-11-19 17:29:57.676207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.140 [2024-11-19 17:29:57.676213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.140 [2024-11-19 17:29:57.676219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:56.140 [2024-11-19 17:29:57.677836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.140 [2024-11-19 17:29:57.677970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.140 [2024-11-19 17:29:57.678038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.140 [2024-11-19 17:29:57.678039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:56.140 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7786 00:11:56.140 [2024-11-19 17:29:57.984414] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:56.140 { 00:11:56.140 "nqn": "nqn.2016-06.io.spdk:cnode7786", 00:11:56.140 "tgt_name": "foobar", 00:11:56.140 "method": "nvmf_create_subsystem", 00:11:56.140 "req_id": 1 00:11:56.140 } 00:11:56.140 Got JSON-RPC error 
response 00:11:56.140 response: 00:11:56.140 { 00:11:56.140 "code": -32603, 00:11:56.140 "message": "Unable to find target foobar" 00:11:56.140 }' 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:56.140 { 00:11:56.140 "nqn": "nqn.2016-06.io.spdk:cnode7786", 00:11:56.140 "tgt_name": "foobar", 00:11:56.140 "method": "nvmf_create_subsystem", 00:11:56.140 "req_id": 1 00:11:56.140 } 00:11:56.140 Got JSON-RPC error response 00:11:56.140 response: 00:11:56.140 { 00:11:56.140 "code": -32603, 00:11:56.140 "message": "Unable to find target foobar" 00:11:56.140 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3974 00:11:56.140 [2024-11-19 17:29:58.189127] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3974: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:56.140 { 00:11:56.140 "nqn": "nqn.2016-06.io.spdk:cnode3974", 00:11:56.140 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:56.140 "method": "nvmf_create_subsystem", 00:11:56.140 "req_id": 1 00:11:56.140 } 00:11:56.140 Got JSON-RPC error response 00:11:56.140 response: 00:11:56.140 { 00:11:56.140 "code": -32602, 00:11:56.140 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:56.140 }' 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:56.140 { 00:11:56.140 "nqn": "nqn.2016-06.io.spdk:cnode3974", 00:11:56.140 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:56.140 "method": "nvmf_create_subsystem", 00:11:56.140 
"req_id": 1 00:11:56.140 } 00:11:56.140 Got JSON-RPC error response 00:11:56.140 response: 00:11:56.140 { 00:11:56.140 "code": -32602, 00:11:56.140 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:56.140 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:56.140 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3577 00:11:56.400 [2024-11-19 17:29:58.393815] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3577: invalid model number 'SPDK_Controller' 00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:56.400 { 00:11:56.400 "nqn": "nqn.2016-06.io.spdk:cnode3577", 00:11:56.400 "model_number": "SPDK_Controller\u001f", 00:11:56.400 "method": "nvmf_create_subsystem", 00:11:56.400 "req_id": 1 00:11:56.400 } 00:11:56.400 Got JSON-RPC error response 00:11:56.400 response: 00:11:56.400 { 00:11:56.400 "code": -32602, 00:11:56.400 "message": "Invalid MN SPDK_Controller\u001f" 00:11:56.400 }' 00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:56.400 { 00:11:56.400 "nqn": "nqn.2016-06.io.spdk:cnode3577", 00:11:56.400 "model_number": "SPDK_Controller\u001f", 00:11:56.400 "method": "nvmf_create_subsystem", 00:11:56.400 "req_id": 1 00:11:56.400 } 00:11:56.400 Got JSON-RPC error response 00:11:56.400 response: 00:11:56.400 { 00:11:56.400 "code": -32602, 00:11:56.400 "message": "Invalid MN SPDK_Controller\u001f" 00:11:56.400 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:56.400 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:56.401 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:56.401 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:56.401 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.401 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo eUpBgcsrjS26EFCp-6gl+ 00:11:56.401 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s eUpBgcsrjS26EFCp-6gl+ nqn.2016-06.io.spdk:cnode15685 00:11:56.662 [2024-11-19 17:29:58.735004] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15685: invalid serial number 'eUpBgcsrjS26EFCp-6gl+' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:56.662 { 00:11:56.662 "nqn": "nqn.2016-06.io.spdk:cnode15685", 00:11:56.662 "serial_number": "eUpBgcsrjS26EFCp-6gl+", 00:11:56.662 "method": "nvmf_create_subsystem", 00:11:56.662 "req_id": 1 00:11:56.662 } 00:11:56.662 Got JSON-RPC error response 00:11:56.662 response: 00:11:56.662 { 00:11:56.662 "code": -32602, 00:11:56.662 "message": "Invalid SN eUpBgcsrjS26EFCp-6gl+" 00:11:56.662 }' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:56.662 { 00:11:56.662 "nqn": "nqn.2016-06.io.spdk:cnode15685", 00:11:56.662 "serial_number": "eUpBgcsrjS26EFCp-6gl+", 00:11:56.662 "method": "nvmf_create_subsystem", 00:11:56.662 "req_id": 1 00:11:56.662 } 00:11:56.662 Got JSON-RPC error response 00:11:56.662 response: 00:11:56.662 { 00:11:56.662 "code": -32602, 00:11:56.662 "message": "Invalid SN eUpBgcsrjS26EFCp-6gl+" 00:11:56.662 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:56.662 17:29:58 
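The -32602 "Invalid SN" rejection above is the expected outcome: the NVMe Identify Controller data structure reserves 20 bytes for the serial number, and the generated string is 21 characters. The `gen_random_s 41` call that follows presumably targets the 40-byte model number field the same way (my inference from the field sizes, not stated in this log). The check below is purely illustrative of the length rule being exercised, not SPDK's actual validation code.

```shell
# Illustrative only, not SPDK's validation: NVMe SN is 20 bytes, MN is 40.
check_len() {  # usage: check_len <label> <max_len> <string>
    if [ "${#3}" -gt "$2" ]; then
        echo "Invalid $1 $3"
    else
        echo "OK $1"
    fi
}
check_len SN 20 'eUpBgcsrjS26EFCp-6gl+'   # 21 chars, one over the limit
```

Generating strings exactly one character over each limit is the point of these negative tests: a passing run requires the target to reject the subsystem creation with the JSON-RPC error shown above.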
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.662 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:56.662 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:11:56.662 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:56.663 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:56.663 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.663 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.663 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:56.923 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:56.924 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:56.924 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:56.924 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:11:56.924 17:29:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=_`=se.IqvEHa,^I[m\U3L /dev/null' 00:11:59.257 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.163 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.163 00:12:01.163 real 0m12.037s 00:12:01.163 user 0m18.698s 00:12:01.163 sys 0m5.352s 00:12:01.163 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.163 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:01.163 ************************************ 00:12:01.163 END TEST nvmf_invalid 00:12:01.163 ************************************ 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:01.423 ************************************ 00:12:01.423 START TEST nvmf_connect_stress 00:12:01.423 ************************************ 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:01.423 * Looking for test storage... 00:12:01.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.423 17:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.423 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.424 --rc genhtml_branch_coverage=1 00:12:01.424 --rc genhtml_function_coverage=1 00:12:01.424 --rc genhtml_legend=1 00:12:01.424 --rc geninfo_all_blocks=1 00:12:01.424 --rc geninfo_unexecuted_blocks=1 00:12:01.424 00:12:01.424 ' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.424 --rc genhtml_branch_coverage=1 00:12:01.424 --rc genhtml_function_coverage=1 00:12:01.424 --rc genhtml_legend=1 00:12:01.424 --rc geninfo_all_blocks=1 00:12:01.424 --rc geninfo_unexecuted_blocks=1 00:12:01.424 00:12:01.424 ' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.424 --rc genhtml_branch_coverage=1 00:12:01.424 --rc genhtml_function_coverage=1 00:12:01.424 --rc genhtml_legend=1 00:12:01.424 --rc geninfo_all_blocks=1 00:12:01.424 --rc geninfo_unexecuted_blocks=1 00:12:01.424 00:12:01.424 ' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.424 --rc genhtml_branch_coverage=1 00:12:01.424 --rc 
genhtml_function_coverage=1 00:12:01.424 --rc genhtml_legend=1 00:12:01.424 --rc geninfo_all_blocks=1 00:12:01.424 --rc geninfo_unexecuted_blocks=1 00:12:01.424 00:12:01.424 ' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:01.424 17:30:03 
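The `cmp_versions` trace above (`scripts/common.sh`) splits each version string on `.`, `-`, and `:` via `IFS` and `read -ra`, then compares the component arrays pairwise until one side wins; here `1.15 < 2` decides the installed lcov predates the newer coverage flags. A hedged standalone sketch of that comparison (the function name `ver_lt` is mine, not the script's):

```shell
#!/usr/bin/env bash
# Element-wise numeric version comparison: split both versions into
# arrays, compare component by component, treat missing components as 0.
ver_lt() {
  local IFS=.-:                       # split on dot, dash, colon
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a > b )) && return 1           # left side is newer
    (( a < b )) && return 0           # left side is older
  done
  return 1                            # equal -> not less-than
}

ver_lt 1.15 2 && echo "1.15 is older than 2"   # prints the message
```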
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.424 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.996 17:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:07.996 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.996 17:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:07.996 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.996 17:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:07.996 Found net devices under 0000:86:00.0: cvl_0_0 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:07.996 Found net devices under 0000:86:00.1: cvl_0_1 
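The discovery loop traced above resolves each supported PCI function to its kernel interface name by globbing `/sys/bus/pci/devices/<BDF>/net/`, which is how the E810 functions at `0000:86:00.0`/`.1` map to `cvl_0_0`/`cvl_0_1`. A minimal sketch of that lookup (the sysfs-root parameter is my addition so the function can be pointed at a test tree; the real script globs the live tree directly):

```shell
#!/usr/bin/env bash
# For each PCI device node, list the network interfaces the kernel
# exposes under its sysfs entry -- the same glob nvmf/common.sh uses.
list_pci_netdevs() {
  local root=${1:-/sys/bus/pci/devices} dev net
  for dev in "$root"/*; do
    for net in "$dev"/net/*; do
      [ -e "$net" ] || continue      # skip devices with no netdev
      echo "Found net devices under ${dev##*/}: ${net##*/}"
    done
  done
}

list_pci_netdevs   # on a live system, prints one line per NIC function
```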
00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.996 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:07.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:12:07.997 00:12:07.997 --- 10.0.0.2 ping statistics --- 00:12:07.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.997 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:07.997 00:12:07.997 --- 10.0.0.1 ping statistics --- 00:12:07.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.997 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:07.997 17:30:09 
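The `nvmf_tcp_init` sequence traced above moves the target-side interface into a private network namespace, addresses both sides on `10.0.0.0/24`, opens TCP port 4420 in the firewall, and verifies reachability with `ping`. A hedged sketch of that plumbing (the `run()` wrapper and `DRY_RUN` switch are my additions so the sequence can be previewed without root; the real script executes the commands directly):

```shell
#!/usr/bin/env bash
# Namespace plumbing for an NVMe-oF TCP test: target NIC lives in its
# own netns at 10.0.0.2, initiator NIC stays in the root netns at 10.0.0.1.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_ns() {
  local ns=$1 tgt_if=$2 ini_if=$3      # e.g. cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$ini_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2               # target reachable from initiator side
}

DRY_RUN=1
setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1   # prints the command sequence
```

Running the target inside the namespace is what makes `NVMF_TARGET_NS_CMD` (`ip netns exec cvl_0_0_ns_spdk`) necessary as a prefix for every target-side command later in the log.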
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3399984 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3399984 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3399984 ']' 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.997 [2024-11-19 17:30:09.656653] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:12:07.997 [2024-11-19 17:30:09.656700] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.997 [2024-11-19 17:30:09.737904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.997 [2024-11-19 17:30:09.779779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.997 [2024-11-19 17:30:09.779817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.997 [2024-11-19 17:30:09.779824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.997 [2024-11-19 17:30:09.779830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.997 [2024-11-19 17:30:09.779835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
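`nvmfappstart` above launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the daemon is up on `/var/tmp/spdk.sock`. A simplified sketch of that bounded poll (this checks mere path existence with `-e` rather than probing the socket with an RPC as the real helper does, and the name `wait_for_rpc_sock` is mine):

```shell
#!/usr/bin/env bash
# Poll until the daemon's RPC socket path appears, bailing out early if
# the process dies -- a simplified stand-in for waitforlisten.
wait_for_rpc_sock() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
  while (( retries-- > 0 )); do
    [ -e "$sock" ] && return 0             # socket path showed up
    kill -0 "$pid" 2>/dev/null || return 1 # daemon died before listening
    sleep 0.1
  done
  return 1                                 # retry budget exhausted
}
```

Only once this returns does the script start issuing the `rpc_cmd` calls that follow (`nvmf_create_transport`, `nvmf_create_subsystem`, `nvmf_subsystem_add_listener`, `bdev_null_create`).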
00:12:07.997 [2024-11-19 17:30:09.781203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.997 [2024-11-19 17:30:09.781308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.997 [2024-11-19 17:30:09.781309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.997 [2024-11-19 17:30:09.918045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.997 [2024-11-19 17:30:09.938251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.997 NULL1 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3400006 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.997 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.998 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.257 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.257 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:08.257 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.257 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.257 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.517 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.517 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:08.517 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.517 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.517 17:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.085 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.085 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:09.085 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.085 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.085 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.346 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.346 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:09.346 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.346 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.346 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.607 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.607 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:09.607 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.607 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.607 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.866 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.866 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:09.866 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.866 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.866 17:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.125 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.125 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:10.125 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.125 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.125 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.693 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.693 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:10.693 17:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.693 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.693 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.952 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.952 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:10.952 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.952 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.952 17:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.211 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.211 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:11.211 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.211 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.211 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.471 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.471 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:11.471 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.471 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.471 
17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.730 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.730 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:11.730 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.730 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.730 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.298 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.298 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:12.299 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.299 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.299 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.559 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.559 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:12.559 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.559 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.559 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.819 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.819 
17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:12.819 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.819 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.819 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.078 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.078 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:13.079 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.079 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.079 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.647 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.647 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:13.647 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.647 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.647 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.906 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.907 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:13.907 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:13.907 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.907 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.166 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.166 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:14.166 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.166 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.166 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.425 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.425 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:14.425 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.425 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.425 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.685 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.685 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:14.685 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.685 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.685 17:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:15.281 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.281 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:15.281 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.281 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.281 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.556 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.556 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:15.556 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.556 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.556 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.815 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:15.815 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.815 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.815 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.074 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3400006 00:12:16.074 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.074 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.074 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.333 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.333 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:16.333 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.333 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.333 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.902 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.902 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:16.902 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.902 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.902 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.168 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.168 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:17.168 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.168 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:17.168 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.427 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.427 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:17.427 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.427 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.427 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.686 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.686 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:17.686 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.686 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.686 17:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.944 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3400006 00:12:17.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3400006) - No such process 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3400006 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.944 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.944 rmmod nvme_tcp 00:12:17.944 rmmod nvme_fabrics 00:12:18.203 rmmod nvme_keyring 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3399984 ']' 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3399984 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3399984 ']' 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3399984 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3399984 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3399984' 00:12:18.203 killing process with pid 3399984 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3399984 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3399984 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:18.203 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:18.463 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:18.463 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:18.463 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:18.463 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.463 17:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.463 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.463 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.463 17:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.370 00:12:20.370 real 0m19.078s 00:12:20.370 user 0m39.302s 00:12:20.370 sys 0m8.638s 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.370 ************************************ 00:12:20.370 END TEST nvmf_connect_stress 00:12:20.370 ************************************ 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.370 ************************************ 00:12:20.370 START TEST nvmf_fused_ordering 00:12:20.370 ************************************ 00:12:20.370 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:20.630 * Looking for test storage... 
00:12:20.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:20.630 17:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:20.630 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.631 17:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.631 --rc genhtml_branch_coverage=1 00:12:20.631 --rc genhtml_function_coverage=1 00:12:20.631 --rc genhtml_legend=1 00:12:20.631 --rc geninfo_all_blocks=1 00:12:20.631 --rc geninfo_unexecuted_blocks=1 00:12:20.631 00:12:20.631 ' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.631 --rc genhtml_branch_coverage=1 00:12:20.631 --rc genhtml_function_coverage=1 00:12:20.631 --rc genhtml_legend=1 00:12:20.631 --rc geninfo_all_blocks=1 00:12:20.631 --rc geninfo_unexecuted_blocks=1 00:12:20.631 00:12:20.631 ' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.631 --rc genhtml_branch_coverage=1 00:12:20.631 --rc genhtml_function_coverage=1 00:12:20.631 --rc genhtml_legend=1 00:12:20.631 --rc geninfo_all_blocks=1 00:12:20.631 --rc geninfo_unexecuted_blocks=1 00:12:20.631 00:12:20.631 ' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.631 --rc genhtml_branch_coverage=1 00:12:20.631 --rc genhtml_function_coverage=1 00:12:20.631 --rc genhtml_legend=1 00:12:20.631 --rc geninfo_all_blocks=1 00:12:20.631 --rc geninfo_unexecuted_blocks=1 00:12:20.631 00:12:20.631 ' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.631 17:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.203 17:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.203 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.203 17:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.203 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.203 17:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.203 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.203 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.203 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:27.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:12:27.203 00:12:27.204 --- 10.0.0.2 ping statistics --- 00:12:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.204 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:12:27.204 00:12:27.204 --- 10.0.0.1 ping statistics --- 00:12:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.204 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:27.204 17:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3405364 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3405364 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3405364 ']' 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.204 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 [2024-11-19 17:30:28.824066] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:12:27.204 [2024-11-19 17:30:28.824133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.204 [2024-11-19 17:30:28.905408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.204 [2024-11-19 17:30:28.946546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.204 [2024-11-19 17:30:28.946581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.204 [2024-11-19 17:30:28.946587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.204 [2024-11-19 17:30:28.946594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.204 [2024-11-19 17:30:28.946599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:27.204 [2024-11-19 17:30:28.947195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 [2024-11-19 17:30:29.082744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 [2024-11-19 17:30:29.102931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 NULL1 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.204 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:27.204 [2024-11-19 17:30:29.156140] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:12:27.204 [2024-11-19 17:30:29.156171] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405404 ] 00:12:27.463 Attached to nqn.2016-06.io.spdk:cnode1 00:12:27.463 Namespace ID: 1 size: 1GB 00:12:27.463 fused_ordering(0) 00:12:27.463 fused_ordering(1) 00:12:27.463 fused_ordering(2) 00:12:27.463 fused_ordering(3) 00:12:27.463 fused_ordering(4) 00:12:27.463 fused_ordering(5) 00:12:27.463 fused_ordering(6) 00:12:27.463 fused_ordering(7) 00:12:27.463 fused_ordering(8) 00:12:27.463 fused_ordering(9) 00:12:27.463 fused_ordering(10) 00:12:27.463 fused_ordering(11) 00:12:27.463 fused_ordering(12) 00:12:27.463 fused_ordering(13) 00:12:27.463 fused_ordering(14) 00:12:27.463 fused_ordering(15) 00:12:27.463 fused_ordering(16) 00:12:27.463 fused_ordering(17) 00:12:27.463 fused_ordering(18) 00:12:27.463 fused_ordering(19) 00:12:27.463 fused_ordering(20) 00:12:27.463 fused_ordering(21) 00:12:27.463 fused_ordering(22) 00:12:27.463 fused_ordering(23) 00:12:27.463 fused_ordering(24) 00:12:27.463 fused_ordering(25) 00:12:27.463 fused_ordering(26) 00:12:27.463 fused_ordering(27) 00:12:27.463 
fused_ordering(28) 00:12:27.463 fused_ordering(29) 00:12:27.463 fused_ordering(30) 00:12:27.463 fused_ordering(31) 00:12:27.463 fused_ordering(32) 00:12:27.464 fused_ordering(33) 00:12:27.464 fused_ordering(34) 00:12:27.464 fused_ordering(35) 00:12:27.464 fused_ordering(36) 00:12:27.464 fused_ordering(37) 00:12:27.464 fused_ordering(38) 00:12:27.464 fused_ordering(39) 00:12:27.464 fused_ordering(40) 00:12:27.464 fused_ordering(41) 00:12:27.464 fused_ordering(42) 00:12:27.464 fused_ordering(43) 00:12:27.464 fused_ordering(44) 00:12:27.464 fused_ordering(45) 00:12:27.464 fused_ordering(46) 00:12:27.464 fused_ordering(47) 00:12:27.464 fused_ordering(48) 00:12:27.464 fused_ordering(49) 00:12:27.464 fused_ordering(50) 00:12:27.464 fused_ordering(51) 00:12:27.464 fused_ordering(52) 00:12:27.464 fused_ordering(53) 00:12:27.464 fused_ordering(54) 00:12:27.464 fused_ordering(55) 00:12:27.464 fused_ordering(56) 00:12:27.464 fused_ordering(57) 00:12:27.464 fused_ordering(58) 00:12:27.464 fused_ordering(59) 00:12:27.464 fused_ordering(60) 00:12:27.464 fused_ordering(61) 00:12:27.464 fused_ordering(62) 00:12:27.464 fused_ordering(63) 00:12:27.464 fused_ordering(64) 00:12:27.464 fused_ordering(65) 00:12:27.464 fused_ordering(66) 00:12:27.464 fused_ordering(67) 00:12:27.464 fused_ordering(68) 00:12:27.464 fused_ordering(69) 00:12:27.464 fused_ordering(70) 00:12:27.464 fused_ordering(71) 00:12:27.464 fused_ordering(72) 00:12:27.464 fused_ordering(73) 00:12:27.464 fused_ordering(74) 00:12:27.464 fused_ordering(75) 00:12:27.464 fused_ordering(76) 00:12:27.464 fused_ordering(77) 00:12:27.464 fused_ordering(78) 00:12:27.464 fused_ordering(79) 00:12:27.464 fused_ordering(80) 00:12:27.464 fused_ordering(81) 00:12:27.464 fused_ordering(82) 00:12:27.464 fused_ordering(83) 00:12:27.464 fused_ordering(84) 00:12:27.464 fused_ordering(85) 00:12:27.464 fused_ordering(86) 00:12:27.464 fused_ordering(87) 00:12:27.464 fused_ordering(88) 00:12:27.464 fused_ordering(89) 00:12:27.464 
fused_ordering(90) 00:12:27.464 fused_ordering(91) 00:12:27.464 fused_ordering(92) 00:12:27.464 fused_ordering(93) 00:12:27.464 fused_ordering(94) 00:12:27.464 fused_ordering(95) 00:12:27.464 fused_ordering(96) 00:12:27.464 fused_ordering(97) 00:12:27.464 fused_ordering(98) 00:12:27.464 fused_ordering(99) 00:12:27.464 fused_ordering(100) 00:12:27.464 fused_ordering(101) 00:12:27.464 fused_ordering(102) 00:12:27.464 fused_ordering(103) 00:12:27.464 fused_ordering(104) 00:12:27.464 fused_ordering(105) 00:12:27.464 fused_ordering(106) 00:12:27.464 fused_ordering(107) 00:12:27.464 fused_ordering(108) 00:12:27.464 fused_ordering(109) 00:12:27.464 fused_ordering(110) 00:12:27.464 fused_ordering(111) 00:12:27.464 fused_ordering(112) 00:12:27.464 fused_ordering(113) 00:12:27.464 fused_ordering(114) 00:12:27.464 fused_ordering(115) 00:12:27.464 fused_ordering(116) 00:12:27.464 fused_ordering(117) 00:12:27.464 fused_ordering(118) 00:12:27.464 fused_ordering(119) 00:12:27.464 fused_ordering(120) 00:12:27.464 fused_ordering(121) 00:12:27.464 fused_ordering(122) 00:12:27.464 fused_ordering(123) 00:12:27.464 fused_ordering(124) 00:12:27.464 fused_ordering(125) 00:12:27.464 fused_ordering(126) 00:12:27.464 fused_ordering(127) 00:12:27.464 fused_ordering(128) 00:12:27.464 fused_ordering(129) 00:12:27.464 fused_ordering(130) 00:12:27.464 fused_ordering(131) 00:12:27.464 fused_ordering(132) 00:12:27.464 fused_ordering(133) 00:12:27.464 fused_ordering(134) 00:12:27.464 fused_ordering(135) 00:12:27.464 fused_ordering(136) 00:12:27.464 fused_ordering(137) 00:12:27.464 fused_ordering(138) 00:12:27.464 fused_ordering(139) 00:12:27.464 fused_ordering(140) 00:12:27.464 fused_ordering(141) 00:12:27.464 fused_ordering(142) 00:12:27.464 fused_ordering(143) 00:12:27.464 fused_ordering(144) 00:12:27.464 fused_ordering(145) 00:12:27.464 fused_ordering(146) 00:12:27.464 fused_ordering(147) 00:12:27.464 fused_ordering(148) 00:12:27.464 fused_ordering(149) 00:12:27.464 fused_ordering(150) 
00:12:27.464 fused_ordering(151) 00:12:27.464 fused_ordering(152) 00:12:27.464 fused_ordering(153) 00:12:27.464 fused_ordering(154) 00:12:27.464 fused_ordering(155) 00:12:27.464 fused_ordering(156) 00:12:27.464 fused_ordering(157) 00:12:27.464 fused_ordering(158) 00:12:27.464 fused_ordering(159) 00:12:27.464 fused_ordering(160) 00:12:27.464 fused_ordering(161) 00:12:27.464 fused_ordering(162) 00:12:27.464 fused_ordering(163) 00:12:27.464 fused_ordering(164) 00:12:27.464 fused_ordering(165) 00:12:27.464 fused_ordering(166) 00:12:27.464 fused_ordering(167) 00:12:27.464 fused_ordering(168) 00:12:27.464 fused_ordering(169) 00:12:27.464 fused_ordering(170) 00:12:27.464 fused_ordering(171) 00:12:27.464 fused_ordering(172) 00:12:27.464 fused_ordering(173) 00:12:27.464 fused_ordering(174) 00:12:27.464 fused_ordering(175) 00:12:27.464 fused_ordering(176) 00:12:27.464 fused_ordering(177) 00:12:27.464 fused_ordering(178) 00:12:27.464 fused_ordering(179) 00:12:27.464 fused_ordering(180) 00:12:27.464 fused_ordering(181) 00:12:27.464 fused_ordering(182) 00:12:27.464 fused_ordering(183) 00:12:27.464 fused_ordering(184) 00:12:27.464 fused_ordering(185) 00:12:27.464 fused_ordering(186) 00:12:27.464 fused_ordering(187) 00:12:27.464 fused_ordering(188) 00:12:27.464 fused_ordering(189) 00:12:27.464 fused_ordering(190) 00:12:27.464 fused_ordering(191) 00:12:27.464 fused_ordering(192) 00:12:27.464 fused_ordering(193) 00:12:27.464 fused_ordering(194) 00:12:27.464 fused_ordering(195) 00:12:27.464 fused_ordering(196) 00:12:27.464 fused_ordering(197) 00:12:27.464 fused_ordering(198) 00:12:27.464 fused_ordering(199) 00:12:27.464 fused_ordering(200) 00:12:27.464 fused_ordering(201) 00:12:27.464 fused_ordering(202) 00:12:27.464 fused_ordering(203) 00:12:27.464 fused_ordering(204) 00:12:27.464 fused_ordering(205) 00:12:27.723 fused_ordering(206) 00:12:27.723 fused_ordering(207) 00:12:27.723 fused_ordering(208) 00:12:27.723 fused_ordering(209) 00:12:27.723 fused_ordering(210) 00:12:27.723 
fused_ordering(211) 00:12:27.723 fused_ordering(212) 00:12:27.723 fused_ordering(213) 00:12:27.723 fused_ordering(214) 00:12:27.723 fused_ordering(215) 00:12:27.723 fused_ordering(216) 00:12:27.723 fused_ordering(217) 00:12:27.723 fused_ordering(218) 00:12:27.723 fused_ordering(219) 00:12:27.723 fused_ordering(220) 00:12:27.723 fused_ordering(221) 00:12:27.723 fused_ordering(222) 00:12:27.723 fused_ordering(223) 00:12:27.723 fused_ordering(224) 00:12:27.723 fused_ordering(225) 00:12:27.723 fused_ordering(226) 00:12:27.723 fused_ordering(227) 00:12:27.723 fused_ordering(228) 00:12:27.723 fused_ordering(229) 00:12:27.723 fused_ordering(230) 00:12:27.723 fused_ordering(231) 00:12:27.723 fused_ordering(232) 00:12:27.723 fused_ordering(233) 00:12:27.723 fused_ordering(234) 00:12:27.723 fused_ordering(235) 00:12:27.723 fused_ordering(236) 00:12:27.723 fused_ordering(237) 00:12:27.723 fused_ordering(238) 00:12:27.723 fused_ordering(239) 00:12:27.723 fused_ordering(240) 00:12:27.723 fused_ordering(241) 00:12:27.723 fused_ordering(242) 00:12:27.723 fused_ordering(243) 00:12:27.723 fused_ordering(244) 00:12:27.723 fused_ordering(245) 00:12:27.723 fused_ordering(246) 00:12:27.723 fused_ordering(247) 00:12:27.723 fused_ordering(248) 00:12:27.723 fused_ordering(249) 00:12:27.723 fused_ordering(250) 00:12:27.723 fused_ordering(251) 00:12:27.723 fused_ordering(252) 00:12:27.723 fused_ordering(253) 00:12:27.723 fused_ordering(254) 00:12:27.723 fused_ordering(255) 00:12:27.723 fused_ordering(256) 00:12:27.723 fused_ordering(257) 00:12:27.723 fused_ordering(258) 00:12:27.723 fused_ordering(259) 00:12:27.723 fused_ordering(260) 00:12:27.723 fused_ordering(261) 00:12:27.723 fused_ordering(262) 00:12:27.723 fused_ordering(263) 00:12:27.723 fused_ordering(264) 00:12:27.723 fused_ordering(265) 00:12:27.723 fused_ordering(266) 00:12:27.723 fused_ordering(267) 00:12:27.723 fused_ordering(268) 00:12:27.723 fused_ordering(269) 00:12:27.723 fused_ordering(270) 00:12:27.723 fused_ordering(271) 
00:12:27.723 fused_ordering(272) 00:12:27.723 fused_ordering(273) 00:12:27.723 fused_ordering(274) 00:12:27.723 fused_ordering(275) 00:12:27.723 fused_ordering(276) 00:12:27.723 fused_ordering(277) 00:12:27.723 fused_ordering(278) 00:12:27.723 fused_ordering(279) 00:12:27.723 fused_ordering(280) 00:12:27.723 fused_ordering(281) 00:12:27.723 fused_ordering(282) 00:12:27.723 fused_ordering(283) 00:12:27.723 fused_ordering(284) 00:12:27.723 fused_ordering(285) 00:12:27.723 fused_ordering(286) 00:12:27.723 fused_ordering(287) 00:12:27.723 fused_ordering(288) 00:12:27.723 fused_ordering(289) 00:12:27.723 fused_ordering(290) 00:12:27.723 fused_ordering(291) 00:12:27.723 fused_ordering(292) 00:12:27.723 fused_ordering(293) 00:12:27.723 fused_ordering(294) 00:12:27.723 fused_ordering(295) 00:12:27.723 fused_ordering(296) 00:12:27.723 fused_ordering(297) 00:12:27.723 fused_ordering(298) 00:12:27.723 fused_ordering(299) 00:12:27.723 fused_ordering(300) 00:12:27.723 fused_ordering(301) 00:12:27.723 fused_ordering(302) 00:12:27.723 fused_ordering(303) 00:12:27.723 fused_ordering(304) 00:12:27.723 fused_ordering(305) 00:12:27.723 fused_ordering(306) 00:12:27.723 fused_ordering(307) 00:12:27.723 fused_ordering(308) 00:12:27.723 fused_ordering(309) 00:12:27.723 fused_ordering(310) 00:12:27.723 fused_ordering(311) 00:12:27.723 fused_ordering(312) 00:12:27.723 fused_ordering(313) 00:12:27.723 fused_ordering(314) 00:12:27.723 fused_ordering(315) 00:12:27.723 fused_ordering(316) 00:12:27.723 fused_ordering(317) 00:12:27.723 fused_ordering(318) 00:12:27.723 fused_ordering(319) 00:12:27.723 fused_ordering(320) 00:12:27.723 fused_ordering(321) 00:12:27.723 fused_ordering(322) 00:12:27.723 fused_ordering(323) 00:12:27.723 fused_ordering(324) 00:12:27.723 fused_ordering(325) 00:12:27.723 fused_ordering(326) 00:12:27.723 fused_ordering(327) 00:12:27.723 fused_ordering(328) 00:12:27.723 fused_ordering(329) 00:12:27.723 fused_ordering(330) 00:12:27.723 fused_ordering(331) 00:12:27.723 
fused_ordering(332) 00:12:27.723 fused_ordering(333) 00:12:27.723 fused_ordering(334) 00:12:27.723 fused_ordering(335) 00:12:27.723 fused_ordering(336) 00:12:27.723 fused_ordering(337) 00:12:27.723 fused_ordering(338) 00:12:27.723 fused_ordering(339) 00:12:27.723 fused_ordering(340) 00:12:27.723 fused_ordering(341) 00:12:27.723 fused_ordering(342) 00:12:27.723 fused_ordering(343) 00:12:27.723 fused_ordering(344) 00:12:27.723 fused_ordering(345) 00:12:27.723 fused_ordering(346) 00:12:27.723 fused_ordering(347) 00:12:27.723 fused_ordering(348) 00:12:27.723 fused_ordering(349) 00:12:27.723 fused_ordering(350) 00:12:27.723 fused_ordering(351) 00:12:27.723 fused_ordering(352) 00:12:27.723 fused_ordering(353) 00:12:27.723 fused_ordering(354) 00:12:27.723 fused_ordering(355) 00:12:27.723 fused_ordering(356) 00:12:27.723 fused_ordering(357) 00:12:27.724 fused_ordering(358) 00:12:27.724 fused_ordering(359) 00:12:27.724 fused_ordering(360) 00:12:27.724 fused_ordering(361) 00:12:27.724 fused_ordering(362) 00:12:27.724 fused_ordering(363) 00:12:27.724 fused_ordering(364) 00:12:27.724 fused_ordering(365) 00:12:27.724 fused_ordering(366) 00:12:27.724 fused_ordering(367) 00:12:27.724 fused_ordering(368) 00:12:27.724 fused_ordering(369) 00:12:27.724 fused_ordering(370) 00:12:27.724 fused_ordering(371) 00:12:27.724 fused_ordering(372) 00:12:27.724 fused_ordering(373) 00:12:27.724 fused_ordering(374) 00:12:27.724 fused_ordering(375) 00:12:27.724 fused_ordering(376) 00:12:27.724 fused_ordering(377) 00:12:27.724 fused_ordering(378) 00:12:27.724 fused_ordering(379) 00:12:27.724 fused_ordering(380) 00:12:27.724 fused_ordering(381) 00:12:27.724 fused_ordering(382) 00:12:27.724 fused_ordering(383) 00:12:27.724 fused_ordering(384) 00:12:27.724 fused_ordering(385) 00:12:27.724 fused_ordering(386) 00:12:27.724 fused_ordering(387) 00:12:27.724 fused_ordering(388) 00:12:27.724 fused_ordering(389) 00:12:27.724 fused_ordering(390) 00:12:27.724 fused_ordering(391) 00:12:27.724 fused_ordering(392) 
00:12:27.724 fused_ordering(393) 00:12:27.724 fused_ordering(394) 00:12:27.724 fused_ordering(395) 00:12:27.724 fused_ordering(396) 00:12:27.724 fused_ordering(397) 00:12:27.724 fused_ordering(398) 00:12:27.724 fused_ordering(399) 00:12:27.724 fused_ordering(400) 00:12:27.724 fused_ordering(401) 00:12:27.724 fused_ordering(402) 00:12:27.724 fused_ordering(403) 00:12:27.724 fused_ordering(404) 00:12:27.724 fused_ordering(405) 00:12:27.724 fused_ordering(406) 00:12:27.724 fused_ordering(407) 00:12:27.724 fused_ordering(408) 00:12:27.724 fused_ordering(409) 00:12:27.724 fused_ordering(410) 00:12:27.984 fused_ordering(411) 00:12:27.984 fused_ordering(412) 00:12:27.984 fused_ordering(413) 00:12:27.984 fused_ordering(414) 00:12:27.984 fused_ordering(415) 00:12:27.984 fused_ordering(416) 00:12:27.984 fused_ordering(417) 00:12:27.984 fused_ordering(418) 00:12:27.984 fused_ordering(419) 00:12:27.984 fused_ordering(420) 00:12:27.984 fused_ordering(421) 00:12:27.984 fused_ordering(422) 00:12:27.984 fused_ordering(423) 00:12:27.984 fused_ordering(424) 00:12:27.984 fused_ordering(425) 00:12:27.984 fused_ordering(426) 00:12:27.984 fused_ordering(427) 00:12:27.984 fused_ordering(428) 00:12:27.984 fused_ordering(429) 00:12:27.984 fused_ordering(430) 00:12:27.984 fused_ordering(431) 00:12:27.984 fused_ordering(432) 00:12:27.984 fused_ordering(433) 00:12:27.984 fused_ordering(434) 00:12:27.984 fused_ordering(435) 00:12:27.984 fused_ordering(436) 00:12:27.984 fused_ordering(437) 00:12:27.984 fused_ordering(438) 00:12:27.984 fused_ordering(439) 00:12:27.984 fused_ordering(440) 00:12:27.984 fused_ordering(441) 00:12:27.984 fused_ordering(442) 00:12:27.984 fused_ordering(443) 00:12:27.984 fused_ordering(444) 00:12:27.984 fused_ordering(445) 00:12:27.984 fused_ordering(446) 00:12:27.984 fused_ordering(447) 00:12:27.984 fused_ordering(448) 00:12:27.984 fused_ordering(449) 00:12:27.984 fused_ordering(450) 00:12:27.984 fused_ordering(451) 00:12:27.984 fused_ordering(452) 00:12:27.984 
fused_ordering(453) 00:12:27.984 fused_ordering(454) 00:12:27.984 fused_ordering(455) 00:12:27.984 fused_ordering(456) 00:12:27.984 fused_ordering(457) 00:12:27.984 fused_ordering(458) 00:12:27.984 fused_ordering(459) 00:12:27.984 fused_ordering(460) 00:12:27.984 fused_ordering(461) 00:12:27.984 fused_ordering(462) 00:12:27.984 fused_ordering(463) 00:12:27.984 fused_ordering(464) 00:12:27.984 fused_ordering(465) 00:12:27.984 fused_ordering(466) 00:12:27.984 fused_ordering(467) 00:12:27.984 fused_ordering(468) 00:12:27.984 fused_ordering(469) 00:12:27.984 fused_ordering(470) 00:12:27.984 fused_ordering(471) 00:12:27.984 fused_ordering(472) 00:12:27.984 fused_ordering(473) 00:12:27.984 fused_ordering(474) 00:12:27.984 fused_ordering(475) 00:12:27.984 fused_ordering(476) 00:12:27.984 fused_ordering(477) 00:12:27.984 fused_ordering(478) 00:12:27.984 fused_ordering(479) 00:12:27.984 fused_ordering(480) 00:12:27.984 fused_ordering(481) 00:12:27.984 fused_ordering(482) 00:12:27.984 fused_ordering(483) 00:12:27.984 fused_ordering(484) 00:12:27.984 fused_ordering(485) 00:12:27.984 fused_ordering(486) 00:12:27.984 fused_ordering(487) 00:12:27.984 fused_ordering(488) 00:12:27.984 fused_ordering(489) 00:12:27.984 fused_ordering(490) 00:12:27.984 fused_ordering(491) 00:12:27.984 fused_ordering(492) 00:12:27.984 fused_ordering(493) 00:12:27.984 fused_ordering(494) 00:12:27.984 fused_ordering(495) 00:12:27.984 fused_ordering(496) 00:12:27.984 fused_ordering(497) 00:12:27.984 fused_ordering(498) 00:12:27.984 fused_ordering(499) 00:12:27.984 fused_ordering(500) 00:12:27.984 fused_ordering(501) 00:12:27.984 fused_ordering(502) 00:12:27.984 fused_ordering(503) 00:12:27.984 fused_ordering(504) 00:12:27.984 fused_ordering(505) 00:12:27.984 fused_ordering(506) 00:12:27.984 fused_ordering(507) 00:12:27.984 fused_ordering(508) 00:12:27.984 fused_ordering(509) 00:12:27.984 fused_ordering(510) 00:12:27.984 fused_ordering(511) 00:12:27.984 fused_ordering(512) 00:12:27.984 fused_ordering(513) 
00:12:27.984 fused_ordering(514) 00:12:27.984 fused_ordering(515) 00:12:27.984 fused_ordering(516) 00:12:27.984 fused_ordering(517) 00:12:27.984 fused_ordering(518) 00:12:27.984 fused_ordering(519) 00:12:27.984 fused_ordering(520) 00:12:27.984 fused_ordering(521) 00:12:27.984 fused_ordering(522) 00:12:27.984 fused_ordering(523) 00:12:27.984 fused_ordering(524) 00:12:27.984 fused_ordering(525) 00:12:27.984 fused_ordering(526) 00:12:27.984 fused_ordering(527) 00:12:27.984 fused_ordering(528) 00:12:27.984 fused_ordering(529) 00:12:27.984 fused_ordering(530) 00:12:27.984 fused_ordering(531) 00:12:27.984 fused_ordering(532) 00:12:27.984 fused_ordering(533) 00:12:27.984 fused_ordering(534) 00:12:27.984 fused_ordering(535) 00:12:27.984 fused_ordering(536) 00:12:27.984 fused_ordering(537) 00:12:27.984 fused_ordering(538) 00:12:27.984 fused_ordering(539) 00:12:27.984 fused_ordering(540) 00:12:27.984 fused_ordering(541) 00:12:27.984 fused_ordering(542) 00:12:27.984 fused_ordering(543) 00:12:27.984 fused_ordering(544) 00:12:27.984 fused_ordering(545) 00:12:27.984 fused_ordering(546) 00:12:27.984 fused_ordering(547) 00:12:27.984 fused_ordering(548) 00:12:27.984 fused_ordering(549) 00:12:27.984 fused_ordering(550) 00:12:27.984 fused_ordering(551) 00:12:27.984 fused_ordering(552) 00:12:27.984 fused_ordering(553) 00:12:27.984 fused_ordering(554) 00:12:27.984 fused_ordering(555) 00:12:27.984 fused_ordering(556) 00:12:27.984 fused_ordering(557) 00:12:27.984 fused_ordering(558) 00:12:27.984 fused_ordering(559) 00:12:27.984 fused_ordering(560) 00:12:27.984 fused_ordering(561) 00:12:27.984 fused_ordering(562) 00:12:27.984 fused_ordering(563) 00:12:27.984 fused_ordering(564) 00:12:27.984 fused_ordering(565) 00:12:27.984 fused_ordering(566) 00:12:27.984 fused_ordering(567) 00:12:27.984 fused_ordering(568) 00:12:27.984 fused_ordering(569) 00:12:27.984 fused_ordering(570) 00:12:27.984 fused_ordering(571) 00:12:27.984 fused_ordering(572) 00:12:27.984 fused_ordering(573) 00:12:27.984 
fused_ordering(574) 00:12:27.984 fused_ordering(575) 00:12:27.984 fused_ordering(576) 00:12:27.984 fused_ordering(577) 00:12:27.984 fused_ordering(578) 00:12:27.984 fused_ordering(579) 00:12:27.984 fused_ordering(580) 00:12:27.984 fused_ordering(581) 00:12:27.984 fused_ordering(582) 00:12:27.984 fused_ordering(583) 00:12:27.984 fused_ordering(584) 00:12:27.984 fused_ordering(585) 00:12:27.984 fused_ordering(586) 00:12:27.984 fused_ordering(587) 00:12:27.984 fused_ordering(588) 00:12:27.984 fused_ordering(589) 00:12:27.984 fused_ordering(590) 00:12:27.984 fused_ordering(591) 00:12:27.984 fused_ordering(592) 00:12:27.984 fused_ordering(593) 00:12:27.984 fused_ordering(594) 00:12:27.984 fused_ordering(595) 00:12:27.984 fused_ordering(596) 00:12:27.984 fused_ordering(597) 00:12:27.984 fused_ordering(598) 00:12:27.984 fused_ordering(599) 00:12:27.984 fused_ordering(600) 00:12:27.984 fused_ordering(601) 00:12:27.984 fused_ordering(602) 00:12:27.984 fused_ordering(603) 00:12:27.984 fused_ordering(604) 00:12:27.984 fused_ordering(605) 00:12:27.984 fused_ordering(606) 00:12:27.984 fused_ordering(607) 00:12:27.984 fused_ordering(608) 00:12:27.984 fused_ordering(609) 00:12:27.984 fused_ordering(610) 00:12:27.984 fused_ordering(611) 00:12:27.984 fused_ordering(612) 00:12:27.984 fused_ordering(613) 00:12:27.984 fused_ordering(614) 00:12:27.984 fused_ordering(615) 00:12:28.244 fused_ordering(616) 00:12:28.244 fused_ordering(617) 00:12:28.244 fused_ordering(618) 00:12:28.244 fused_ordering(619) 00:12:28.244 fused_ordering(620) 00:12:28.244 fused_ordering(621) 00:12:28.244 fused_ordering(622) 00:12:28.244 fused_ordering(623) 00:12:28.244 fused_ordering(624) 00:12:28.244 fused_ordering(625) 00:12:28.244 fused_ordering(626) 00:12:28.244 fused_ordering(627) 00:12:28.244 fused_ordering(628) 00:12:28.244 fused_ordering(629) 00:12:28.244 fused_ordering(630) 00:12:28.244 fused_ordering(631) 00:12:28.244 fused_ordering(632) 00:12:28.244 fused_ordering(633) 00:12:28.244 fused_ordering(634) 
00:12:28.244 fused_ordering(635) 00:12:28.244 fused_ordering(636) 00:12:28.244 fused_ordering(637) 00:12:28.244 fused_ordering(638) 00:12:28.244 fused_ordering(639) 00:12:28.244 fused_ordering(640) 00:12:28.244 fused_ordering(641) 00:12:28.244 fused_ordering(642) 00:12:28.244 fused_ordering(643) 00:12:28.244 fused_ordering(644) 00:12:28.244 fused_ordering(645) 00:12:28.244 fused_ordering(646) 00:12:28.244 fused_ordering(647) 00:12:28.244 fused_ordering(648) 00:12:28.244 fused_ordering(649) 00:12:28.244 fused_ordering(650) 00:12:28.244 fused_ordering(651) 00:12:28.244 fused_ordering(652) 00:12:28.244 fused_ordering(653) 00:12:28.244 fused_ordering(654) 00:12:28.244 fused_ordering(655) 00:12:28.244 fused_ordering(656) 00:12:28.244 fused_ordering(657) 00:12:28.244 fused_ordering(658) 00:12:28.244 fused_ordering(659) 00:12:28.244 fused_ordering(660) 00:12:28.244 fused_ordering(661) 00:12:28.244 fused_ordering(662) 00:12:28.244 fused_ordering(663) 00:12:28.244 fused_ordering(664) 00:12:28.244 fused_ordering(665) 00:12:28.244 fused_ordering(666) 00:12:28.244 fused_ordering(667) 00:12:28.244 fused_ordering(668) 00:12:28.244 fused_ordering(669) 00:12:28.244 fused_ordering(670) 00:12:28.244 fused_ordering(671) 00:12:28.244 fused_ordering(672) 00:12:28.244 fused_ordering(673) 00:12:28.244 fused_ordering(674) 00:12:28.244 fused_ordering(675) 00:12:28.244 fused_ordering(676) 00:12:28.244 fused_ordering(677) 00:12:28.244 fused_ordering(678) 00:12:28.244 fused_ordering(679) 00:12:28.244 fused_ordering(680) 00:12:28.244 fused_ordering(681) 00:12:28.244 fused_ordering(682) 00:12:28.244 fused_ordering(683) 00:12:28.244 fused_ordering(684) 00:12:28.244 fused_ordering(685) 00:12:28.244 fused_ordering(686) 00:12:28.244 fused_ordering(687) 00:12:28.244 fused_ordering(688) 00:12:28.244 fused_ordering(689) 00:12:28.244 fused_ordering(690) 00:12:28.244 fused_ordering(691) 00:12:28.244 fused_ordering(692) 00:12:28.244 fused_ordering(693) 00:12:28.245 fused_ordering(694) 00:12:28.245 
fused_ordering(695) 00:12:28.245 fused_ordering(696) 00:12:28.245 fused_ordering(697) 00:12:28.245 fused_ordering(698) 00:12:28.245 fused_ordering(699) 00:12:28.245 fused_ordering(700) 00:12:28.245 fused_ordering(701) 00:12:28.245 fused_ordering(702) 00:12:28.245 fused_ordering(703) 00:12:28.245 fused_ordering(704) 00:12:28.245 fused_ordering(705) 00:12:28.245 fused_ordering(706) 00:12:28.245 fused_ordering(707) 00:12:28.245 fused_ordering(708) 00:12:28.245 fused_ordering(709) 00:12:28.245 fused_ordering(710) 00:12:28.245 fused_ordering(711) 00:12:28.245 fused_ordering(712) 00:12:28.245 fused_ordering(713) 00:12:28.245 fused_ordering(714) 00:12:28.245 fused_ordering(715) 00:12:28.245 fused_ordering(716) 00:12:28.245 fused_ordering(717) 00:12:28.245 fused_ordering(718) 00:12:28.245 fused_ordering(719) 00:12:28.245 fused_ordering(720) 00:12:28.245 fused_ordering(721) 00:12:28.245 fused_ordering(722) 00:12:28.245 fused_ordering(723) 00:12:28.245 fused_ordering(724) 00:12:28.245 fused_ordering(725) 00:12:28.245 fused_ordering(726) 00:12:28.245 fused_ordering(727) 00:12:28.245 fused_ordering(728) 00:12:28.245 fused_ordering(729) 00:12:28.245 fused_ordering(730) 00:12:28.245 fused_ordering(731) 00:12:28.245 fused_ordering(732) 00:12:28.245 fused_ordering(733) 00:12:28.245 fused_ordering(734) 00:12:28.245 fused_ordering(735) 00:12:28.245 fused_ordering(736) 00:12:28.245 fused_ordering(737) 00:12:28.245 fused_ordering(738) 00:12:28.245 fused_ordering(739) 00:12:28.245 fused_ordering(740) 00:12:28.245 fused_ordering(741) 00:12:28.245 fused_ordering(742) 00:12:28.245 fused_ordering(743) 00:12:28.245 fused_ordering(744) 00:12:28.245 fused_ordering(745) 00:12:28.245 fused_ordering(746) 00:12:28.245 fused_ordering(747) 00:12:28.245 fused_ordering(748) 00:12:28.245 fused_ordering(749) 00:12:28.245 fused_ordering(750) 00:12:28.245 fused_ordering(751) 00:12:28.245 fused_ordering(752) 00:12:28.245 fused_ordering(753) 00:12:28.245 fused_ordering(754) 00:12:28.245 fused_ordering(755) 
00:12:28.245 fused_ordering(756) 00:12:28.245 fused_ordering(757) 00:12:28.245 fused_ordering(758) 00:12:28.245 fused_ordering(759) 00:12:28.245 fused_ordering(760) 00:12:28.245 fused_ordering(761) 00:12:28.245 fused_ordering(762) 00:12:28.245 fused_ordering(763) 00:12:28.245 fused_ordering(764) 00:12:28.245 fused_ordering(765) 00:12:28.245 fused_ordering(766) 00:12:28.245 fused_ordering(767) 00:12:28.245 fused_ordering(768) 00:12:28.245 fused_ordering(769) 00:12:28.245 fused_ordering(770) 00:12:28.245 fused_ordering(771) 00:12:28.245 fused_ordering(772) 00:12:28.245 fused_ordering(773) 00:12:28.245 fused_ordering(774) 00:12:28.245 fused_ordering(775) 00:12:28.245 fused_ordering(776) 00:12:28.245 fused_ordering(777) 00:12:28.245 fused_ordering(778) 00:12:28.245 fused_ordering(779) 00:12:28.245 fused_ordering(780) 00:12:28.245 fused_ordering(781) 00:12:28.245 fused_ordering(782) 00:12:28.245 fused_ordering(783) 00:12:28.245 fused_ordering(784) 00:12:28.245 fused_ordering(785) 00:12:28.245 fused_ordering(786) 00:12:28.245 fused_ordering(787) 00:12:28.245 fused_ordering(788) 00:12:28.245 fused_ordering(789) 00:12:28.245 fused_ordering(790) 00:12:28.245 fused_ordering(791) 00:12:28.245 fused_ordering(792) 00:12:28.245 fused_ordering(793) 00:12:28.245 fused_ordering(794) 00:12:28.245 fused_ordering(795) 00:12:28.245 fused_ordering(796) 00:12:28.245 fused_ordering(797) 00:12:28.245 fused_ordering(798) 00:12:28.245 fused_ordering(799) 00:12:28.245 fused_ordering(800) 00:12:28.245 fused_ordering(801) 00:12:28.245 fused_ordering(802) 00:12:28.245 fused_ordering(803) 00:12:28.245 fused_ordering(804) 00:12:28.245 fused_ordering(805) 00:12:28.245 fused_ordering(806) 00:12:28.245 fused_ordering(807) 00:12:28.245 fused_ordering(808) 00:12:28.245 fused_ordering(809) 00:12:28.245 fused_ordering(810) 00:12:28.245 fused_ordering(811) 00:12:28.245 fused_ordering(812) 00:12:28.245 fused_ordering(813) 00:12:28.245 fused_ordering(814) 00:12:28.245 fused_ordering(815) 00:12:28.245 
fused_ordering(816) 00:12:28.245 fused_ordering(817) 00:12:28.245 fused_ordering(818) 00:12:28.245 fused_ordering(819) 00:12:28.245 fused_ordering(820) 00:12:28.814 fused_ordering(821) 00:12:28.814 fused_ordering(822) 00:12:28.814 fused_ordering(823) 00:12:28.814 fused_ordering(824) 00:12:28.814 fused_ordering(825) 00:12:28.814 fused_ordering(826) 00:12:28.814 fused_ordering(827) 00:12:28.814 fused_ordering(828) 00:12:28.814 fused_ordering(829) 00:12:28.814 fused_ordering(830) 00:12:28.814 fused_ordering(831) 00:12:28.814 fused_ordering(832) 00:12:28.814 fused_ordering(833) 00:12:28.814 fused_ordering(834) 00:12:28.814 fused_ordering(835) 00:12:28.814 fused_ordering(836) 00:12:28.814 fused_ordering(837) 00:12:28.814 fused_ordering(838) 00:12:28.814 fused_ordering(839) 00:12:28.814 fused_ordering(840) 00:12:28.814 fused_ordering(841) 00:12:28.814 fused_ordering(842) 00:12:28.814 fused_ordering(843) 00:12:28.814 fused_ordering(844) 00:12:28.814 fused_ordering(845) 00:12:28.814 fused_ordering(846) 00:12:28.814 fused_ordering(847) 00:12:28.814 fused_ordering(848) 00:12:28.814 fused_ordering(849) 00:12:28.814 fused_ordering(850) 00:12:28.814 fused_ordering(851) 00:12:28.814 fused_ordering(852) 00:12:28.814 fused_ordering(853) 00:12:28.814 fused_ordering(854) 00:12:28.814 fused_ordering(855) 00:12:28.814 fused_ordering(856) 00:12:28.814 fused_ordering(857) 00:12:28.814 fused_ordering(858) 00:12:28.814 fused_ordering(859) 00:12:28.814 fused_ordering(860) 00:12:28.814 fused_ordering(861) 00:12:28.814 fused_ordering(862) 00:12:28.814 fused_ordering(863) 00:12:28.814 fused_ordering(864) 00:12:28.814 fused_ordering(865) 00:12:28.814 fused_ordering(866) 00:12:28.814 fused_ordering(867) 00:12:28.814 fused_ordering(868) 00:12:28.814 fused_ordering(869) 00:12:28.814 fused_ordering(870) 00:12:28.814 fused_ordering(871) 00:12:28.814 fused_ordering(872) 00:12:28.814 fused_ordering(873) 00:12:28.814 fused_ordering(874) 00:12:28.814 fused_ordering(875) 00:12:28.814 fused_ordering(876) 
00:12:28.814 fused_ordering(877) 00:12:28.814 fused_ordering(878) 00:12:28.814 fused_ordering(879) 00:12:28.814 fused_ordering(880) 00:12:28.814 fused_ordering(881) 00:12:28.814 fused_ordering(882) 00:12:28.814 fused_ordering(883) 00:12:28.814 fused_ordering(884) 00:12:28.814 fused_ordering(885) 00:12:28.814 fused_ordering(886) 00:12:28.814 fused_ordering(887) 00:12:28.814 fused_ordering(888) 00:12:28.814 fused_ordering(889) 00:12:28.814 fused_ordering(890) 00:12:28.814 fused_ordering(891) 00:12:28.814 fused_ordering(892) 00:12:28.814 fused_ordering(893) 00:12:28.814 fused_ordering(894) 00:12:28.814 fused_ordering(895) 00:12:28.814 fused_ordering(896) 00:12:28.814 fused_ordering(897) 00:12:28.814 fused_ordering(898) 00:12:28.814 fused_ordering(899) 00:12:28.814 fused_ordering(900) 00:12:28.814 fused_ordering(901) 00:12:28.814 fused_ordering(902) 00:12:28.814 fused_ordering(903) 00:12:28.814 fused_ordering(904) 00:12:28.814 fused_ordering(905) 00:12:28.814 fused_ordering(906) 00:12:28.814 fused_ordering(907) 00:12:28.814 fused_ordering(908) 00:12:28.814 fused_ordering(909) 00:12:28.814 fused_ordering(910) 00:12:28.814 fused_ordering(911) 00:12:28.814 fused_ordering(912) 00:12:28.814 fused_ordering(913) 00:12:28.814 fused_ordering(914) 00:12:28.814 fused_ordering(915) 00:12:28.814 fused_ordering(916) 00:12:28.814 fused_ordering(917) 00:12:28.814 fused_ordering(918) 00:12:28.814 fused_ordering(919) 00:12:28.814 fused_ordering(920) 00:12:28.814 fused_ordering(921) 00:12:28.814 fused_ordering(922) 00:12:28.814 fused_ordering(923) 00:12:28.814 fused_ordering(924) 00:12:28.814 fused_ordering(925) 00:12:28.814 fused_ordering(926) 00:12:28.814 fused_ordering(927) 00:12:28.814 fused_ordering(928) 00:12:28.814 fused_ordering(929) 00:12:28.814 fused_ordering(930) 00:12:28.814 fused_ordering(931) 00:12:28.814 fused_ordering(932) 00:12:28.814 fused_ordering(933) 00:12:28.814 fused_ordering(934) 00:12:28.814 fused_ordering(935) 00:12:28.814 fused_ordering(936) 00:12:28.814 
fused_ordering(937) 00:12:28.814 fused_ordering(938) 00:12:28.814 fused_ordering(939) 00:12:28.814 fused_ordering(940) 00:12:28.814 fused_ordering(941) 00:12:28.814 fused_ordering(942) 00:12:28.814 fused_ordering(943) 00:12:28.814 fused_ordering(944) 00:12:28.814 fused_ordering(945) 00:12:28.814 fused_ordering(946) 00:12:28.814 fused_ordering(947) 00:12:28.814 fused_ordering(948) 00:12:28.814 fused_ordering(949) 00:12:28.814 fused_ordering(950) 00:12:28.815 fused_ordering(951) 00:12:28.815 fused_ordering(952) 00:12:28.815 fused_ordering(953) 00:12:28.815 fused_ordering(954) 00:12:28.815 fused_ordering(955) 00:12:28.815 fused_ordering(956) 00:12:28.815 fused_ordering(957) 00:12:28.815 fused_ordering(958) 00:12:28.815 fused_ordering(959) 00:12:28.815 fused_ordering(960) 00:12:28.815 fused_ordering(961) 00:12:28.815 fused_ordering(962) 00:12:28.815 fused_ordering(963) 00:12:28.815 fused_ordering(964) 00:12:28.815 fused_ordering(965) 00:12:28.815 fused_ordering(966) 00:12:28.815 fused_ordering(967) 00:12:28.815 fused_ordering(968) 00:12:28.815 fused_ordering(969) 00:12:28.815 fused_ordering(970) 00:12:28.815 fused_ordering(971) 00:12:28.815 fused_ordering(972) 00:12:28.815 fused_ordering(973) 00:12:28.815 fused_ordering(974) 00:12:28.815 fused_ordering(975) 00:12:28.815 fused_ordering(976) 00:12:28.815 fused_ordering(977) 00:12:28.815 fused_ordering(978) 00:12:28.815 fused_ordering(979) 00:12:28.815 fused_ordering(980) 00:12:28.815 fused_ordering(981) 00:12:28.815 fused_ordering(982) 00:12:28.815 fused_ordering(983) 00:12:28.815 fused_ordering(984) 00:12:28.815 fused_ordering(985) 00:12:28.815 fused_ordering(986) 00:12:28.815 fused_ordering(987) 00:12:28.815 fused_ordering(988) 00:12:28.815 fused_ordering(989) 00:12:28.815 fused_ordering(990) 00:12:28.815 fused_ordering(991) 00:12:28.815 fused_ordering(992) 00:12:28.815 fused_ordering(993) 00:12:28.815 fused_ordering(994) 00:12:28.815 fused_ordering(995) 00:12:28.815 fused_ordering(996) 00:12:28.815 fused_ordering(997) 
00:12:28.815 fused_ordering(998) 00:12:28.815 fused_ordering(999) 00:12:28.815 fused_ordering(1000) 00:12:28.815 fused_ordering(1001) 00:12:28.815 fused_ordering(1002) 00:12:28.815 fused_ordering(1003) 00:12:28.815 fused_ordering(1004) 00:12:28.815 fused_ordering(1005) 00:12:28.815 fused_ordering(1006) 00:12:28.815 fused_ordering(1007) 00:12:28.815 fused_ordering(1008) 00:12:28.815 fused_ordering(1009) 00:12:28.815 fused_ordering(1010) 00:12:28.815 fused_ordering(1011) 00:12:28.815 fused_ordering(1012) 00:12:28.815 fused_ordering(1013) 00:12:28.815 fused_ordering(1014) 00:12:28.815 fused_ordering(1015) 00:12:28.815 fused_ordering(1016) 00:12:28.815 fused_ordering(1017) 00:12:28.815 fused_ordering(1018) 00:12:28.815 fused_ordering(1019) 00:12:28.815 fused_ordering(1020) 00:12:28.815 fused_ordering(1021) 00:12:28.815 fused_ordering(1022) 00:12:28.815 fused_ordering(1023) 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.815 rmmod nvme_tcp 00:12:28.815 rmmod nvme_fabrics 00:12:28.815 rmmod nvme_keyring 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3405364 ']' 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3405364 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3405364 ']' 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3405364 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.815 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3405364 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3405364' 00:12:29.075 killing process with pid 3405364 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3405364 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3405364 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.075 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.613 00:12:31.613 real 0m10.698s 00:12:31.613 user 0m5.042s 00:12:31.613 sys 0m5.797s 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.613 ************************************ 00:12:31.613 END TEST nvmf_fused_ordering 00:12:31.613 ************************************ 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.613 17:30:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.613 ************************************ 00:12:31.613 START TEST nvmf_ns_masking 00:12:31.613 ************************************ 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.613 * Looking for test storage... 00:12:31.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.613 17:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:31.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.613 --rc genhtml_branch_coverage=1 00:12:31.613 --rc genhtml_function_coverage=1 00:12:31.613 --rc genhtml_legend=1 00:12:31.613 --rc geninfo_all_blocks=1 00:12:31.613 --rc geninfo_unexecuted_blocks=1 00:12:31.613 00:12:31.613 ' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:31.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.613 --rc genhtml_branch_coverage=1 00:12:31.613 --rc genhtml_function_coverage=1 00:12:31.613 --rc genhtml_legend=1 00:12:31.613 --rc geninfo_all_blocks=1 00:12:31.613 --rc geninfo_unexecuted_blocks=1 00:12:31.613 00:12:31.613 ' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:31.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.613 --rc genhtml_branch_coverage=1 00:12:31.613 --rc genhtml_function_coverage=1 00:12:31.613 --rc genhtml_legend=1 00:12:31.613 --rc geninfo_all_blocks=1 00:12:31.613 --rc geninfo_unexecuted_blocks=1 00:12:31.613 00:12:31.613 ' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:31.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.613 --rc genhtml_branch_coverage=1 00:12:31.613 --rc 
genhtml_function_coverage=1 00:12:31.613 --rc genhtml_legend=1 00:12:31.613 --rc geninfo_all_blocks=1 00:12:31.613 --rc geninfo_unexecuted_blocks=1 00:12:31.613 00:12:31.613 ' 00:12:31.613 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7472365a-0098-4a07-8853-4bb485f9e5b2 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=44005726-7150-4cac-862a-82344865304d 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=464980f8-9ede-4494-8a61-25278915efbc 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.614 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.186 17:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.186 17:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:38.186 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:38.186 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.186 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:12:38.187 Found net devices under 0000:86:00.0: cvl_0_0 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:38.187 Found net devices under 0000:86:00.1: cvl_0_1 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:12:38.187 00:12:38.187 --- 10.0.0.2 ping statistics --- 00:12:38.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.187 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:12:38.187 00:12:38.187 --- 10.0.0.1 ping statistics --- 00:12:38.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.187 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3409172 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3409172 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3409172 ']' 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:38.187 [2024-11-19 17:30:39.577106] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:12:38.187 [2024-11-19 17:30:39.577160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.187 [2024-11-19 17:30:39.657159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.187 [2024-11-19 17:30:39.698707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.187 [2024-11-19 17:30:39.698745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:38.187 [2024-11-19 17:30:39.698753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.187 [2024-11-19 17:30:39.698759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.187 [2024-11-19 17:30:39.698764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.187 [2024-11-19 17:30:39.699342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.187 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.188 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:38.188 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.188 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.188 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:38.188 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.188 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:38.188 [2024-11-19 17:30:40.012090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.188 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:38.188 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:38.188 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:38.188 Malloc1 00:12:38.188 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:38.447 Malloc2 00:12:38.447 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.705 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:38.705 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.965 [2024-11-19 17:30:41.045692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.965 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:38.965 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 464980f8-9ede-4494-8a61-25278915efbc -a 10.0.0.2 -s 4420 -i 4 00:12:39.225 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.225 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:39.225 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.225 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:39.225 17:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:41.129 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:41.388 [ 0]:0x1 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:41.388 
17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba1f19a68d81415391bcb712ec6a0d84 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba1f19a68d81415391bcb712ec6a0d84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:41.388 [ 0]:0x1 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:41.388 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba1f19a68d81415391bcb712ec6a0d84 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba1f19a68d81415391bcb712ec6a0d84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:41.647 [ 1]:0x2 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdbe52d1ee374bc4af0ec57d54c30293 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdbe52d1ee374bc4af0ec57d54c30293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:41.647 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.906 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.165 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:42.424 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:42.424 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 464980f8-9ede-4494-8a61-25278915efbc -a 10.0.0.2 -s 4420 -i 4 00:12:42.424 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:42.424 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:42.424 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.424 17:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:42.424 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:42.424 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:44.328 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:44.328 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:44.328 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.328 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:44.328 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.328 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:44.587 [ 0]:0x2 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdbe52d1ee374bc4af0ec57d54c30293 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdbe52d1ee374bc4af0ec57d54c30293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.587 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.845 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:44.845 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.845 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:44.845 [ 0]:0x1 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba1f19a68d81415391bcb712ec6a0d84 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba1f19a68d81415391bcb712ec6a0d84 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.845 [ 1]:0x2 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.845 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdbe52d1ee374bc4af0ec57d54c30293 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdbe52d1ee374bc4af0ec57d54c30293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:45.104 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:45.364 [ 0]:0x2 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdbe52d1ee374bc4af0ec57d54c30293 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdbe52d1ee374bc4af0ec57d54c30293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.364 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 464980f8-9ede-4494-8a61-25278915efbc -a 10.0.0.2 -s 4420 -i 4 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:45.623 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.157 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.157 [ 0]:0x1 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.157 17:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba1f19a68d81415391bcb712ec6a0d84 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba1f19a68d81415391bcb712ec6a0d84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:48.157 [ 1]:0x2 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdbe52d1ee374bc4af0ec57d54c30293 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdbe52d1ee374bc4af0ec57d54c30293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.157 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:48.416 
17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:48.416 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:48.417 [ 0]:0x2 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdbe52d1ee374bc4af0ec57d54c30293 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdbe52d1ee374bc4af0ec57d54c30293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.417 17:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:48.417 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.676 [2024-11-19 17:30:50.665165] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:48.676 request: 00:12:48.676 { 00:12:48.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.676 "nsid": 2, 00:12:48.676 "host": "nqn.2016-06.io.spdk:host1", 00:12:48.676 "method": "nvmf_ns_remove_host", 00:12:48.676 "req_id": 1 00:12:48.676 } 00:12:48.676 Got JSON-RPC error response 00:12:48.676 response: 00:12:48.676 { 00:12:48.676 "code": -32602, 00:12:48.676 "message": "Invalid parameters" 00:12:48.676 } 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:48.676 17:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:48.676 [ 0]:0x2 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdbe52d1ee374bc4af0ec57d54c30293 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdbe52d1ee374bc4af0ec57d54c30293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3411169 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3411169 
/var/tmp/host.sock 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3411169 ']' 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:48.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.676 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:48.935 [2024-11-19 17:30:50.899528] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:12:48.935 [2024-11-19 17:30:50.899575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411169 ] 00:12:48.935 [2024-11-19 17:30:50.974464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.935 [2024-11-19 17:30:51.015333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.194 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.194 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:49.194 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.453 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.453 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7472365a-0098-4a07-8853-4bb485f9e5b2 00:12:49.453 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:49.453 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7472365A00984A0788534BB485F9E5B2 -i 00:12:49.712 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 44005726-7150-4cac-862a-82344865304d 00:12:49.712 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:49.712 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4400572671504CAC862A82344865304D -i 00:12:49.970 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:50.228 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:50.228 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:50.228 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:50.795 nvme0n1 00:12:50.795 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:50.795 17:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:51.053 nvme1n2 00:12:51.053 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:51.053 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:51.053 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:51.053 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:51.053 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:51.312 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:51.312 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:51.312 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:51.312 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:51.571 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7472365a-0098-4a07-8853-4bb485f9e5b2 == \7\4\7\2\3\6\5\a\-\0\0\9\8\-\4\a\0\7\-\8\8\5\3\-\4\b\b\4\8\5\f\9\e\5\b\2 ]] 00:12:51.571 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:51.571 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:51.571 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:51.571 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 44005726-7150-4cac-862a-82344865304d == \4\4\0\0\5\7\2\6\-\7\1\5\0\-\4\c\a\c\-\8\6\2\a\-\8\2\3\4\4\8\6\5\3\0\4\d ]] 00:12:51.571 17:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.830 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7472365a-0098-4a07-8853-4bb485f9e5b2 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7472365A00984A0788534BB485F9E5B2 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7472365A00984A0788534BB485F9E5B2 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:52.089 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7472365A00984A0788534BB485F9E5B2 00:12:52.348 [2024-11-19 17:30:54.355326] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:52.348 [2024-11-19 17:30:54.355358] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:52.348 [2024-11-19 17:30:54.355366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.348 request: 00:12:52.348 { 00:12:52.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.348 "namespace": { 00:12:52.348 "bdev_name": "invalid", 00:12:52.348 "nsid": 1, 00:12:52.348 "nguid": "7472365A00984A0788534BB485F9E5B2", 00:12:52.348 "no_auto_visible": false 00:12:52.348 }, 00:12:52.348 "method": "nvmf_subsystem_add_ns", 00:12:52.348 "req_id": 1 00:12:52.348 } 00:12:52.348 Got JSON-RPC error response 00:12:52.348 response: 00:12:52.348 { 00:12:52.348 "code": -32602, 00:12:52.348 "message": "Invalid parameters" 00:12:52.348 } 00:12:52.348 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:52.348 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:52.348 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:52.348 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:52.348 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7472365a-0098-4a07-8853-4bb485f9e5b2 00:12:52.348 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:52.348 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7472365A00984A0788534BB485F9E5B2 -i 00:12:52.607 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:54.512 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:54.512 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:54.512 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3411169 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3411169 ']' 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3411169 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3411169 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3411169' 00:12:54.771 killing process with pid 3411169 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3411169 00:12:54.771 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3411169 00:12:55.031 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.290 rmmod nvme_tcp 00:12:55.290 rmmod 
nvme_fabrics 00:12:55.290 rmmod nvme_keyring 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3409172 ']' 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3409172 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3409172 ']' 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3409172 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3409172 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3409172' 00:12:55.290 killing process with pid 3409172 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3409172 00:12:55.290 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3409172 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.549 
17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.549 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.570 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:57.829 00:12:57.829 real 0m26.414s 00:12:57.829 user 0m31.757s 00:12:57.829 sys 0m7.095s 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.829 ************************************ 00:12:57.829 END TEST nvmf_ns_masking 00:12:57.829 ************************************ 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.829 ************************************ 00:12:57.829 START TEST nvmf_nvme_cli 00:12:57.829 ************************************ 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:57.829 * Looking for test storage... 00:12:57.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.829 17:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.829 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:57.830 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:57.830 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.830 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.830 17:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:57.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.830 --rc genhtml_branch_coverage=1 00:12:57.830 --rc genhtml_function_coverage=1 00:12:57.830 --rc genhtml_legend=1 00:12:57.830 --rc geninfo_all_blocks=1 00:12:57.830 --rc geninfo_unexecuted_blocks=1 00:12:57.830 
00:12:57.830 ' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:57.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.830 --rc genhtml_branch_coverage=1 00:12:57.830 --rc genhtml_function_coverage=1 00:12:57.830 --rc genhtml_legend=1 00:12:57.830 --rc geninfo_all_blocks=1 00:12:57.830 --rc geninfo_unexecuted_blocks=1 00:12:57.830 00:12:57.830 ' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:57.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.830 --rc genhtml_branch_coverage=1 00:12:57.830 --rc genhtml_function_coverage=1 00:12:57.830 --rc genhtml_legend=1 00:12:57.830 --rc geninfo_all_blocks=1 00:12:57.830 --rc geninfo_unexecuted_blocks=1 00:12:57.830 00:12:57.830 ' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:57.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.830 --rc genhtml_branch_coverage=1 00:12:57.830 --rc genhtml_function_coverage=1 00:12:57.830 --rc genhtml_legend=1 00:12:57.830 --rc geninfo_all_blocks=1 00:12:57.830 --rc geninfo_unexecuted_blocks=1 00:12:57.830 00:12:57.830 ' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.830 17:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:57.830 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.831 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:58.089 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:58.089 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:58.090 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.090 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:04.663 17:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:04.663 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:04.663 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.663 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.664 17:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:04.664 Found net devices under 0000:86:00.0: cvl_0_0 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:04.664 Found net devices under 0000:86:00.1: cvl_0_1 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.664 17:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:04.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:13:04.664 00:13:04.664 --- 10.0.0.2 ping statistics --- 00:13:04.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.664 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:13:04.664 00:13:04.664 --- 10.0.0.1 ping statistics --- 00:13:04.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.664 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:04.664 17:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3415891 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3415891 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3415891 ']' 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.664 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.664 [2024-11-19 17:31:06.031604] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:13:04.664 [2024-11-19 17:31:06.031651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.664 [2024-11-19 17:31:06.111914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.664 [2024-11-19 17:31:06.155829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.664 [2024-11-19 17:31:06.155869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.664 [2024-11-19 17:31:06.155876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.664 [2024-11-19 17:31:06.155881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.664 [2024-11-19 17:31:06.155886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:04.664 [2024-11-19 17:31:06.157468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.664 [2024-11-19 17:31:06.157580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.664 [2024-11-19 17:31:06.157689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.664 [2024-11-19 17:31:06.157690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.664 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.664 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:04.664 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.664 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.664 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 [2024-11-19 17:31:06.921355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 Malloc0 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 Malloc1 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.924 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 [2024-11-19 17:31:07.015656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.924 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:05.183 00:13:05.183 Discovery Log Number of Records 2, Generation counter 2 00:13:05.183 =====Discovery Log Entry 0====== 00:13:05.183 trtype: tcp 00:13:05.183 adrfam: ipv4 00:13:05.183 subtype: current discovery subsystem 00:13:05.183 treq: not required 00:13:05.183 portid: 0 00:13:05.183 trsvcid: 4420 
00:13:05.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:05.183 traddr: 10.0.0.2 00:13:05.183 eflags: explicit discovery connections, duplicate discovery information 00:13:05.183 sectype: none 00:13:05.183 =====Discovery Log Entry 1====== 00:13:05.183 trtype: tcp 00:13:05.183 adrfam: ipv4 00:13:05.183 subtype: nvme subsystem 00:13:05.183 treq: not required 00:13:05.183 portid: 0 00:13:05.183 trsvcid: 4420 00:13:05.183 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:05.183 traddr: 10.0.0.2 00:13:05.183 eflags: none 00:13:05.183 sectype: none 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:05.183 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.559 17:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:06.559 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.559 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.559 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:06.559 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:06.559 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:08.464 
17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:08.464 /dev/nvme0n2 ]] 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.464 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.465 rmmod nvme_tcp 00:13:08.465 rmmod nvme_fabrics 00:13:08.465 rmmod nvme_keyring 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3415891 ']' 
00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3415891 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3415891 ']' 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3415891 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.465 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3415891 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3415891' 00:13:08.725 killing process with pid 3415891 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3415891 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3415891 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.725 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.261 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.261 00:13:11.261 real 0m13.158s 00:13:11.261 user 0m20.804s 00:13:11.261 sys 0m5.160s 00:13:11.261 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.261 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:11.261 ************************************ 00:13:11.261 END TEST nvmf_nvme_cli 00:13:11.261 ************************************ 00:13:11.261 17:31:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:11.261 17:31:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:11.261 17:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.261 17:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.262 ************************************ 00:13:11.262 
START TEST nvmf_vfio_user 00:13:11.262 ************************************ 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:11.262 * Looking for test storage... 00:13:11.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.262 17:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:11.262 17:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:11.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.262 --rc genhtml_branch_coverage=1 00:13:11.262 --rc genhtml_function_coverage=1 00:13:11.262 --rc genhtml_legend=1 00:13:11.262 --rc geninfo_all_blocks=1 00:13:11.262 --rc geninfo_unexecuted_blocks=1 00:13:11.262 00:13:11.262 ' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:11.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.262 --rc genhtml_branch_coverage=1 00:13:11.262 --rc genhtml_function_coverage=1 00:13:11.262 --rc genhtml_legend=1 00:13:11.262 --rc geninfo_all_blocks=1 00:13:11.262 --rc geninfo_unexecuted_blocks=1 00:13:11.262 00:13:11.262 ' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:11.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.262 --rc genhtml_branch_coverage=1 00:13:11.262 --rc genhtml_function_coverage=1 00:13:11.262 --rc genhtml_legend=1 00:13:11.262 --rc geninfo_all_blocks=1 00:13:11.262 --rc geninfo_unexecuted_blocks=1 00:13:11.262 00:13:11.262 ' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:11.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.262 --rc genhtml_branch_coverage=1 00:13:11.262 --rc genhtml_function_coverage=1 00:13:11.262 --rc genhtml_legend=1 00:13:11.262 --rc geninfo_all_blocks=1 00:13:11.262 --rc geninfo_unexecuted_blocks=1 00:13:11.262 00:13:11.262 ' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.262 
17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.262 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:11.263 17:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3417185 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3417185' 00:13:11.263 Process pid: 3417185 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3417185 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 3417185 ']' 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.263 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:11.263 [2024-11-19 17:31:13.336059] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:13:11.263 [2024-11-19 17:31:13.336105] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.263 [2024-11-19 17:31:13.410425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.263 [2024-11-19 17:31:13.450567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.263 [2024-11-19 17:31:13.450606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.263 [2024-11-19 17:31:13.450613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.263 [2024-11-19 17:31:13.450619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.263 [2024-11-19 17:31:13.450624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:11.263 [2024-11-19 17:31:13.452131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.263 [2024-11-19 17:31:13.452239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.263 [2024-11-19 17:31:13.452325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.263 [2024-11-19 17:31:13.452327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.521 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.521 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:11.521 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:12.458 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:12.717 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:12.717 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:12.717 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.717 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:12.717 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:12.976 Malloc1 00:13:12.976 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:13.235 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:13.235 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:13.494 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.494 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:13.494 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:13.753 Malloc2 00:13:13.753 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:14.012 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:14.012 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:14.271 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:14.271 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:14.271 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:14.271 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:14.271 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:14.271 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:14.271 [2024-11-19 17:31:16.458103] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:13:14.271 [2024-11-19 17:31:16.458149] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417710 ] 00:13:14.532 [2024-11-19 17:31:16.500403] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:14.532 [2024-11-19 17:31:16.505714] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.532 [2024-11-19 17:31:16.505735] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7177de0000 00:13:14.532 [2024-11-19 17:31:16.506718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.507713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.508720] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.509724] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.510733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.511740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.512746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.513753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.532 [2024-11-19 17:31:16.514760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.532 [2024-11-19 17:31:16.514769] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7177dd5000 00:13:14.532 [2024-11-19 17:31:16.515711] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:14.532 [2024-11-19 17:31:16.525320] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:14.532 [2024-11-19 17:31:16.525346] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:14.532 [2024-11-19 17:31:16.530847] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:14.532 [2024-11-19 17:31:16.530884] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:14.532 [2024-11-19 17:31:16.530957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:14.532 [2024-11-19 17:31:16.530973] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:14.532 [2024-11-19 17:31:16.530978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:14.532 [2024-11-19 17:31:16.531842] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:14.532 [2024-11-19 17:31:16.531851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:14.532 [2024-11-19 17:31:16.531857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:14.532 [2024-11-19 17:31:16.532853] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:14.532 [2024-11-19 17:31:16.532861] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:14.532 [2024-11-19 17:31:16.532868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:14.532 [2024-11-19 17:31:16.533857] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:14.532 [2024-11-19 17:31:16.533866] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:14.532 [2024-11-19 17:31:16.534864] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:14.532 [2024-11-19 17:31:16.534873] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:14.532 [2024-11-19 17:31:16.534877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:14.532 [2024-11-19 17:31:16.534883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:14.532 [2024-11-19 17:31:16.534991] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:14.532 [2024-11-19 17:31:16.534996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:14.532 [2024-11-19 17:31:16.535001] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:14.532 [2024-11-19 17:31:16.535875] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:14.532 [2024-11-19 17:31:16.536877] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:14.532 [2024-11-19 17:31:16.537884] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:14.532 [2024-11-19 17:31:16.538879] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:14.532 [2024-11-19 17:31:16.538959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:14.532 [2024-11-19 17:31:16.539893] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:14.532 [2024-11-19 17:31:16.539901] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:14.532 [2024-11-19 17:31:16.539906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:14.532 [2024-11-19 17:31:16.539923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:14.532 [2024-11-19 17:31:16.539934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:14.532 [2024-11-19 17:31:16.539952] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.532 [2024-11-19 17:31:16.539958] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.532 [2024-11-19 17:31:16.539961] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.532 [2024-11-19 17:31:16.539974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.532 [2024-11-19 17:31:16.540026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:14.532 [2024-11-19 17:31:16.540035] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:14.532 [2024-11-19 17:31:16.540040] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:14.532 [2024-11-19 17:31:16.540044] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:14.532 [2024-11-19 17:31:16.540048] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:14.532 [2024-11-19 17:31:16.540054] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:14.532 [2024-11-19 17:31:16.540059] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:14.532 [2024-11-19 17:31:16.540063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:14.532 [2024-11-19 17:31:16.540072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:14.532 [2024-11-19 17:31:16.540081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:14.532 [2024-11-19 17:31:16.540094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:14.532 [2024-11-19 17:31:16.540105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.532 [2024-11-19 
17:31:16.540113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.532 [2024-11-19 17:31:16.540120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.532 [2024-11-19 17:31:16.540130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.532 [2024-11-19 17:31:16.540134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:14.532 [2024-11-19 17:31:16.540141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:14.532 [2024-11-19 17:31:16.540149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:14.532 [2024-11-19 17:31:16.540162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:14.532 [2024-11-19 17:31:16.540169] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:14.532 [2024-11-19 17:31:16.540174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:14.532 [2024-11-19 17:31:16.540180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540268] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:14.533 [2024-11-19 17:31:16.540272] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:14.533 [2024-11-19 17:31:16.540275] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.533 [2024-11-19 17:31:16.540281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540304] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:14.533 [2024-11-19 17:31:16.540311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540325] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.533 [2024-11-19 17:31:16.540329] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.533 [2024-11-19 17:31:16.540332] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.533 [2024-11-19 17:31:16.540337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540379] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.533 [2024-11-19 17:31:16.540382] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.533 [2024-11-19 17:31:16.540385] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.533 [2024-11-19 17:31:16.540391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540447] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:14.533 [2024-11-19 17:31:16.540451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:14.533 [2024-11-19 17:31:16.540455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:14.533 [2024-11-19 17:31:16.540472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540550] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:14.533 [2024-11-19 17:31:16.540555] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:14.533 [2024-11-19 17:31:16.540558] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:14.533 [2024-11-19 17:31:16.540561] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:14.533 [2024-11-19 17:31:16.540564] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:14.533 [2024-11-19 17:31:16.540570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:14.533 [2024-11-19 17:31:16.540576] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:14.533 [2024-11-19 17:31:16.540580] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:14.533 [2024-11-19 17:31:16.540583] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.533 [2024-11-19 17:31:16.540589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540595] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:14.533 [2024-11-19 17:31:16.540599] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.533 [2024-11-19 17:31:16.540602] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.533 [2024-11-19 17:31:16.540607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540613] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:14.533 [2024-11-19 17:31:16.540617] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:14.533 [2024-11-19 17:31:16.540620] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.533 [2024-11-19 17:31:16.540625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:14.533 [2024-11-19 17:31:16.540631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:14.533 [2024-11-19 17:31:16.540658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:14.533 ===================================================== 00:13:14.534 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:14.534 ===================================================== 00:13:14.534 Controller Capabilities/Features 00:13:14.534 ================================ 00:13:14.534 Vendor ID: 4e58 00:13:14.534 Subsystem Vendor ID: 4e58 00:13:14.534 Serial Number: SPDK1 00:13:14.534 Model Number: SPDK bdev Controller 00:13:14.534 Firmware Version: 25.01 00:13:14.534 Recommended Arb Burst: 6 00:13:14.534 IEEE OUI Identifier: 8d 6b 50 00:13:14.534 Multi-path I/O 00:13:14.534 May have multiple subsystem ports: Yes 00:13:14.534 May have multiple controllers: Yes 00:13:14.534 Associated with SR-IOV VF: No 00:13:14.534 Max Data Transfer Size: 131072 00:13:14.534 Max Number of Namespaces: 32 00:13:14.534 Max Number of I/O Queues: 127 00:13:14.534 NVMe Specification Version (VS): 1.3 00:13:14.534 NVMe Specification Version (Identify): 1.3 00:13:14.534 Maximum Queue Entries: 256 00:13:14.534 Contiguous Queues Required: Yes 00:13:14.534 Arbitration Mechanisms Supported 00:13:14.534 Weighted Round Robin: Not Supported 00:13:14.534 Vendor Specific: Not Supported 00:13:14.534 Reset Timeout: 15000 ms 00:13:14.534 Doorbell Stride: 4 bytes 00:13:14.534 NVM Subsystem Reset: Not Supported 00:13:14.534 Command Sets Supported 00:13:14.534 NVM Command Set: Supported 00:13:14.534 Boot Partition: Not Supported 00:13:14.534 Memory 
Page Size Minimum: 4096 bytes 00:13:14.534 Memory Page Size Maximum: 4096 bytes 00:13:14.534 Persistent Memory Region: Not Supported 00:13:14.534 Optional Asynchronous Events Supported 00:13:14.534 Namespace Attribute Notices: Supported 00:13:14.534 Firmware Activation Notices: Not Supported 00:13:14.534 ANA Change Notices: Not Supported 00:13:14.534 PLE Aggregate Log Change Notices: Not Supported 00:13:14.534 LBA Status Info Alert Notices: Not Supported 00:13:14.534 EGE Aggregate Log Change Notices: Not Supported 00:13:14.534 Normal NVM Subsystem Shutdown event: Not Supported 00:13:14.534 Zone Descriptor Change Notices: Not Supported 00:13:14.534 Discovery Log Change Notices: Not Supported 00:13:14.534 Controller Attributes 00:13:14.534 128-bit Host Identifier: Supported 00:13:14.534 Non-Operational Permissive Mode: Not Supported 00:13:14.534 NVM Sets: Not Supported 00:13:14.534 Read Recovery Levels: Not Supported 00:13:14.534 Endurance Groups: Not Supported 00:13:14.534 Predictable Latency Mode: Not Supported 00:13:14.534 Traffic Based Keep ALive: Not Supported 00:13:14.534 Namespace Granularity: Not Supported 00:13:14.534 SQ Associations: Not Supported 00:13:14.534 UUID List: Not Supported 00:13:14.534 Multi-Domain Subsystem: Not Supported 00:13:14.534 Fixed Capacity Management: Not Supported 00:13:14.534 Variable Capacity Management: Not Supported 00:13:14.534 Delete Endurance Group: Not Supported 00:13:14.534 Delete NVM Set: Not Supported 00:13:14.534 Extended LBA Formats Supported: Not Supported 00:13:14.534 Flexible Data Placement Supported: Not Supported 00:13:14.534 00:13:14.534 Controller Memory Buffer Support 00:13:14.534 ================================ 00:13:14.534 Supported: No 00:13:14.534 00:13:14.534 Persistent Memory Region Support 00:13:14.534 ================================ 00:13:14.534 Supported: No 00:13:14.534 00:13:14.534 Admin Command Set Attributes 00:13:14.534 ============================ 00:13:14.534 Security Send/Receive: Not Supported 
00:13:14.534 Format NVM: Not Supported 00:13:14.534 Firmware Activate/Download: Not Supported 00:13:14.534 Namespace Management: Not Supported 00:13:14.534 Device Self-Test: Not Supported 00:13:14.534 Directives: Not Supported 00:13:14.534 NVMe-MI: Not Supported 00:13:14.534 Virtualization Management: Not Supported 00:13:14.534 Doorbell Buffer Config: Not Supported 00:13:14.534 Get LBA Status Capability: Not Supported 00:13:14.534 Command & Feature Lockdown Capability: Not Supported 00:13:14.534 Abort Command Limit: 4 00:13:14.534 Async Event Request Limit: 4 00:13:14.534 Number of Firmware Slots: N/A 00:13:14.534 Firmware Slot 1 Read-Only: N/A 00:13:14.534 Firmware Activation Without Reset: N/A 00:13:14.534 Multiple Update Detection Support: N/A 00:13:14.534 Firmware Update Granularity: No Information Provided 00:13:14.534 Per-Namespace SMART Log: No 00:13:14.534 Asymmetric Namespace Access Log Page: Not Supported 00:13:14.534 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:14.534 Command Effects Log Page: Supported 00:13:14.534 Get Log Page Extended Data: Supported 00:13:14.534 Telemetry Log Pages: Not Supported 00:13:14.534 Persistent Event Log Pages: Not Supported 00:13:14.534 Supported Log Pages Log Page: May Support 00:13:14.534 Commands Supported & Effects Log Page: Not Supported 00:13:14.534 Feature Identifiers & Effects Log Page:May Support 00:13:14.534 NVMe-MI Commands & Effects Log Page: May Support 00:13:14.534 Data Area 4 for Telemetry Log: Not Supported 00:13:14.534 Error Log Page Entries Supported: 128 00:13:14.534 Keep Alive: Supported 00:13:14.534 Keep Alive Granularity: 10000 ms 00:13:14.534 00:13:14.534 NVM Command Set Attributes 00:13:14.534 ========================== 00:13:14.534 Submission Queue Entry Size 00:13:14.534 Max: 64 00:13:14.534 Min: 64 00:13:14.534 Completion Queue Entry Size 00:13:14.534 Max: 16 00:13:14.534 Min: 16 00:13:14.534 Number of Namespaces: 32 00:13:14.534 Compare Command: Supported 00:13:14.534 Write Uncorrectable 
Command: Not Supported 00:13:14.534 Dataset Management Command: Supported 00:13:14.534 Write Zeroes Command: Supported 00:13:14.534 Set Features Save Field: Not Supported 00:13:14.534 Reservations: Not Supported 00:13:14.534 Timestamp: Not Supported 00:13:14.534 Copy: Supported 00:13:14.534 Volatile Write Cache: Present 00:13:14.534 Atomic Write Unit (Normal): 1 00:13:14.534 Atomic Write Unit (PFail): 1 00:13:14.534 Atomic Compare & Write Unit: 1 00:13:14.534 Fused Compare & Write: Supported 00:13:14.534 Scatter-Gather List 00:13:14.534 SGL Command Set: Supported (Dword aligned) 00:13:14.534 SGL Keyed: Not Supported 00:13:14.534 SGL Bit Bucket Descriptor: Not Supported 00:13:14.534 SGL Metadata Pointer: Not Supported 00:13:14.534 Oversized SGL: Not Supported 00:13:14.534 SGL Metadata Address: Not Supported 00:13:14.534 SGL Offset: Not Supported 00:13:14.534 Transport SGL Data Block: Not Supported 00:13:14.534 Replay Protected Memory Block: Not Supported 00:13:14.534 00:13:14.534 Firmware Slot Information 00:13:14.534 ========================= 00:13:14.534 Active slot: 1 00:13:14.534 Slot 1 Firmware Revision: 25.01 00:13:14.534 00:13:14.534 00:13:14.534 Commands Supported and Effects 00:13:14.534 ============================== 00:13:14.534 Admin Commands 00:13:14.534 -------------- 00:13:14.534 Get Log Page (02h): Supported 00:13:14.534 Identify (06h): Supported 00:13:14.534 Abort (08h): Supported 00:13:14.534 Set Features (09h): Supported 00:13:14.534 Get Features (0Ah): Supported 00:13:14.534 Asynchronous Event Request (0Ch): Supported 00:13:14.534 Keep Alive (18h): Supported 00:13:14.534 I/O Commands 00:13:14.534 ------------ 00:13:14.534 Flush (00h): Supported LBA-Change 00:13:14.534 Write (01h): Supported LBA-Change 00:13:14.534 Read (02h): Supported 00:13:14.534 Compare (05h): Supported 00:13:14.534 Write Zeroes (08h): Supported LBA-Change 00:13:14.534 Dataset Management (09h): Supported LBA-Change 00:13:14.534 Copy (19h): Supported LBA-Change 00:13:14.534 
00:13:14.534 Error Log 00:13:14.534 ========= 00:13:14.534 00:13:14.534 Arbitration 00:13:14.534 =========== 00:13:14.534 Arbitration Burst: 1 00:13:14.534 00:13:14.534 Power Management 00:13:14.534 ================ 00:13:14.534 Number of Power States: 1 00:13:14.534 Current Power State: Power State #0 00:13:14.534 Power State #0: 00:13:14.534 Max Power: 0.00 W 00:13:14.534 Non-Operational State: Operational 00:13:14.534 Entry Latency: Not Reported 00:13:14.534 Exit Latency: Not Reported 00:13:14.534 Relative Read Throughput: 0 00:13:14.534 Relative Read Latency: 0 00:13:14.534 Relative Write Throughput: 0 00:13:14.534 Relative Write Latency: 0 00:13:14.534 Idle Power: Not Reported 00:13:14.534 Active Power: Not Reported 00:13:14.534 Non-Operational Permissive Mode: Not Supported 00:13:14.534 00:13:14.534 Health Information 00:13:14.534 ================== 00:13:14.534 Critical Warnings: 00:13:14.534 Available Spare Space: OK 00:13:14.534 Temperature: OK 00:13:14.534 Device Reliability: OK 00:13:14.535 Read Only: No 00:13:14.535 Volatile Memory Backup: OK 00:13:14.535 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:14.535 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:14.535 Available Spare: 0% 00:13:14.535 Available Spare Threshold: 0% 00:13:14.535 Life Percentage Used: 0% 
[2024-11-19 17:31:16.540748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:14.535 [2024-11-19 17:31:16.540762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:14.535 [2024-11-19 17:31:16.540787] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:14.535 [2024-11-19 17:31:16.540796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.535 [2024-11-19 17:31:16.540802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.535 [2024-11-19 17:31:16.540807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.535 [2024-11-19 17:31:16.540813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.535 [2024-11-19 17:31:16.540898] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:14.535 [2024-11-19 17:31:16.540907] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:14.535 [2024-11-19 17:31:16.541906] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:14.535 [2024-11-19 17:31:16.541960] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:14.535 [2024-11-19 17:31:16.541967] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:14.535 [2024-11-19 17:31:16.542916] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:14.535 [2024-11-19 17:31:16.542926] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:14.535 [2024-11-19 17:31:16.542979] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:14.535 [2024-11-19 17:31:16.545957] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:13:14.535 Data Units Read: 0 00:13:14.535 Data Units Written: 0 00:13:14.535 Host Read Commands: 0 00:13:14.535 Host Write Commands: 0 00:13:14.535 Controller Busy Time: 0 minutes 00:13:14.535 Power Cycles: 0 00:13:14.535 Power On Hours: 0 hours 00:13:14.535 Unsafe Shutdowns: 0 00:13:14.535 Unrecoverable Media Errors: 0 00:13:14.535 Lifetime Error Log Entries: 0 00:13:14.535 Warning Temperature Time: 0 minutes 00:13:14.535 Critical Temperature Time: 0 minutes 00:13:14.535 00:13:14.535 Number of Queues 00:13:14.535 ================ 00:13:14.535 Number of I/O Submission Queues: 127 00:13:14.535 Number of I/O Completion Queues: 127 00:13:14.535 00:13:14.535 Active Namespaces 00:13:14.535 ================= 00:13:14.535 Namespace ID:1 00:13:14.535 Error Recovery Timeout: Unlimited 00:13:14.535 Command Set Identifier: NVM (00h) 00:13:14.535 Deallocate: Supported 00:13:14.535 Deallocated/Unwritten Error: Not Supported 00:13:14.535 Deallocated Read Value: Unknown 00:13:14.535 Deallocate in Write Zeroes: Not Supported 00:13:14.535 Deallocated Guard Field: 0xFFFF 00:13:14.535 Flush: Supported 00:13:14.535 Reservation: Supported 00:13:14.535 Namespace Sharing Capabilities: Multiple Controllers 00:13:14.535 Size (in LBAs): 131072 (0GiB) 00:13:14.535 Capacity (in LBAs): 131072 (0GiB) 00:13:14.535 Utilization (in LBAs): 131072 (0GiB) 00:13:14.535 NGUID: 3B295A2E9FC14F02B9AE0DC19A410B48 00:13:14.535 UUID: 3b295a2e-9fc1-4f02-b9ae-0dc19a410b48 00:13:14.535 Thin Provisioning: Not Supported 00:13:14.535 Per-NS Atomic Units: Yes 00:13:14.535 Atomic Boundary Size (Normal): 0 00:13:14.535 Atomic Boundary Size (PFail): 0 00:13:14.535 Atomic Boundary Offset: 0 00:13:14.535 Maximum Single Source Range Length: 65535 00:13:14.535 Maximum Copy Length: 65535 00:13:14.535 Maximum Source Range Count: 1 00:13:14.535 NGUID/EUI64 Never Reused: No 00:13:14.535 Namespace Write Protected: No 00:13:14.535 Number of LBA Formats: 1 00:13:14.535 Current LBA Format: LBA Format #00 00:13:14.535 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:14.535 00:13:14.535 17:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:14.794 [2024-11-19 17:31:16.781786] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:20.067 Initializing NVMe Controllers 00:13:20.067 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:20.067 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:20.067 Initialization complete. Launching workers. 00:13:20.067 ======================================================== 00:13:20.067 Latency(us) 00:13:20.067 Device Information : IOPS MiB/s Average min max 00:13:20.067 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39931.80 155.98 3205.06 953.31 8182.16 00:13:20.067 ======================================================== 00:13:20.067 Total : 39931.80 155.98 3205.06 953.31 8182.16 00:13:20.067 00:13:20.067 [2024-11-19 17:31:21.801216] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:20.067 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:20.067 [2024-11-19 17:31:22.036341] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:25.340 Initializing NVMe Controllers 00:13:25.340 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:25.340 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:25.340 Initialization complete. Launching workers. 00:13:25.340 ======================================================== 00:13:25.340 Latency(us) 00:13:25.340 Device Information : IOPS MiB/s Average min max 00:13:25.340 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7971.79 4985.67 9979.72 00:13:25.340 ======================================================== 00:13:25.340 Total : 16076.80 62.80 7971.79 4985.67 9979.72 00:13:25.340 00:13:25.340 [2024-11-19 17:31:27.073974] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:25.340 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:25.340 [2024-11-19 17:31:27.289982] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:30.610 [2024-11-19 17:31:32.368246] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:30.610 Initializing NVMe Controllers 00:13:30.610 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.610 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:30.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:30.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:30.610 Initialization complete. 
Launching workers. 00:13:30.610 Starting thread on core 2 00:13:30.610 Starting thread on core 3 00:13:30.610 Starting thread on core 1 00:13:30.610 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:30.610 [2024-11-19 17:31:32.660073] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.901 [2024-11-19 17:31:35.718044] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.901 Initializing NVMe Controllers 00:13:33.901 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.901 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:33.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:33.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:33.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:33.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:33.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:33.901 Initialization complete. Launching workers. 
00:13:33.901 Starting thread on core 1 with urgent priority queue 00:13:33.901 Starting thread on core 2 with urgent priority queue 00:13:33.901 Starting thread on core 3 with urgent priority queue 00:13:33.901 Starting thread on core 0 with urgent priority queue 00:13:33.901 SPDK bdev Controller (SPDK1 ) core 0: 8709.33 IO/s 11.48 secs/100000 ios 00:13:33.901 SPDK bdev Controller (SPDK1 ) core 1: 7670.00 IO/s 13.04 secs/100000 ios 00:13:33.901 SPDK bdev Controller (SPDK1 ) core 2: 7690.33 IO/s 13.00 secs/100000 ios 00:13:33.901 SPDK bdev Controller (SPDK1 ) core 3: 8662.67 IO/s 11.54 secs/100000 ios 00:13:33.901 ======================================================== 00:13:33.901 00:13:33.901 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:33.901 [2024-11-19 17:31:36.000850] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.901 Initializing NVMe Controllers 00:13:33.901 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.901 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.901 Namespace ID: 1 size: 0GB 00:13:33.901 Initialization complete. 00:13:33.901 INFO: using host memory buffer for IO 00:13:33.901 Hello world! 
00:13:33.901 [2024-11-19 17:31:36.035084] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.901 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:34.160 [2024-11-19 17:31:36.326362] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.541 Initializing NVMe Controllers 00:13:35.541 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.541 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.541 Initialization complete. Launching workers. 00:13:35.541 submit (in ns) avg, min, max = 6919.3, 3287.0, 4000115.7 00:13:35.541 complete (in ns) avg, min, max = 20244.8, 1826.1, 4170507.8 00:13:35.541 00:13:35.541 Submit histogram 00:13:35.541 ================ 00:13:35.541 Range in us Cumulative Count 00:13:35.541 3.283 - 3.297: 0.0306% ( 5) 00:13:35.541 3.297 - 3.311: 0.0979% ( 11) 00:13:35.541 3.311 - 3.325: 0.2081% ( 18) 00:13:35.541 3.325 - 3.339: 0.9488% ( 121) 00:13:35.541 3.339 - 3.353: 3.4645% ( 411) 00:13:35.541 3.353 - 3.367: 8.5511% ( 831) 00:13:35.541 3.367 - 3.381: 14.1948% ( 922) 00:13:35.541 3.381 - 3.395: 20.3465% ( 1005) 00:13:35.541 3.395 - 3.409: 26.8899% ( 1069) 00:13:35.541 3.409 - 3.423: 32.3499% ( 892) 00:13:35.541 3.423 - 3.437: 38.3057% ( 973) 00:13:35.541 3.437 - 3.450: 43.7657% ( 892) 00:13:35.541 3.450 - 3.464: 48.0749% ( 704) 00:13:35.541 3.464 - 3.478: 51.9863% ( 639) 00:13:35.541 3.478 - 3.492: 57.7401% ( 940) 00:13:35.541 3.492 - 3.506: 64.6630% ( 1131) 00:13:35.541 3.506 - 3.520: 69.4375% ( 780) 00:13:35.541 3.520 - 3.534: 73.5998% ( 680) 00:13:35.541 3.534 - 3.548: 78.7782% ( 846) 00:13:35.541 3.548 - 3.562: 82.8365% ( 663) 00:13:35.541 3.562 - 3.590: 86.8397% ( 654) 
00:13:35.541 3.590 - 3.617: 87.5987% ( 124) 00:13:35.541 3.617 - 3.645: 88.3271% ( 119) 00:13:35.541 3.645 - 3.673: 89.8574% ( 250) 00:13:35.541 3.673 - 3.701: 91.6998% ( 301) 00:13:35.541 3.701 - 3.729: 93.4872% ( 292) 00:13:35.541 3.729 - 3.757: 95.1705% ( 275) 00:13:35.541 3.757 - 3.784: 96.7558% ( 259) 00:13:35.541 3.784 - 3.812: 98.0351% ( 209) 00:13:35.541 3.812 - 3.840: 98.8370% ( 131) 00:13:35.541 3.840 - 3.868: 99.2410% ( 66) 00:13:35.541 3.868 - 3.896: 99.5287% ( 47) 00:13:35.541 3.896 - 3.923: 99.6083% ( 13) 00:13:35.541 3.923 - 3.951: 99.6511% ( 7) 00:13:35.541 5.148 - 5.176: 99.6572% ( 1) 00:13:35.541 5.231 - 5.259: 99.6695% ( 2) 00:13:35.541 5.259 - 5.287: 99.6756% ( 1) 00:13:35.541 5.287 - 5.315: 99.6817% ( 1) 00:13:35.541 5.315 - 5.343: 99.6878% ( 1) 00:13:35.541 5.343 - 5.370: 99.7062% ( 3) 00:13:35.541 5.370 - 5.398: 99.7123% ( 1) 00:13:35.541 5.398 - 5.426: 99.7184% ( 1) 00:13:35.541 5.426 - 5.454: 99.7246% ( 1) 00:13:35.541 5.454 - 5.482: 99.7429% ( 3) 00:13:35.541 5.482 - 5.510: 99.7490% ( 1) 00:13:35.541 5.621 - 5.649: 99.7552% ( 1) 00:13:35.541 5.649 - 5.677: 99.7613% ( 1) 00:13:35.541 5.899 - 5.927: 99.7674% ( 1) 00:13:35.541 5.983 - 6.010: 99.7735% ( 1) 00:13:35.541 6.066 - 6.094: 99.7796% ( 1) 00:13:35.541 6.122 - 6.150: 99.7919% ( 2) 00:13:35.541 6.233 - 6.261: 99.7980% ( 1) 00:13:35.541 6.344 - 6.372: 99.8041% ( 1) 00:13:35.541 6.428 - 6.456: 99.8102% ( 1) 00:13:35.541 6.567 - 6.595: 99.8164% ( 1) 00:13:35.541 6.595 - 6.623: 99.8286% ( 2) 00:13:35.541 6.706 - 6.734: 99.8347% ( 1) 00:13:35.541 6.762 - 6.790: 99.8470% ( 2) 00:13:35.541 6.845 - 6.873: 99.8531% ( 1) 00:13:35.541 6.873 - 6.901: 99.8592% ( 1) 00:13:35.541 6.901 - 6.929: 99.8653% ( 1) 00:13:35.541 7.235 - 7.290: 99.8715% ( 1) 00:13:35.541 7.680 - 7.736: 99.8776% ( 1) 00:13:35.541 8.070 - 8.125: 99.8837% ( 1) 00:13:35.541 8.292 - 8.348: 99.8898% ( 1) 00:13:35.541 8.403 - 8.459: 99.8959% ( 1) 00:13:35.541 8.515 - 8.570: 99.9021% ( 1) 00:13:35.541 9.071 - 9.127: 99.9082% ( 1) 
00:13:35.541 10.908 - 10.963: 99.9143% ( 1) 00:13:35.541 3989.148 - 4017.642: 100.0000% ( 14) 00:13:35.541 00:13:35.541 Complete histogram 00:13:35.541 ================== 00:13:35.541 Range in us Cumulative Count 00:13:35.541 1.823 - 1.837: 0.1347% ( 22) 00:13:35.541 1.837 - 1.850: 1.3283% ( 195) 00:13:35.541 1.850 - 1.864: 2.6076% ( 209) 00:13:35.541 1.864 - 1.878: 8.1716% ( 909) 00:13:35.541 1.878 - 1.892: 60.2803% ( 8513) 00:13:35.541 1.892 - 1.906: 87.2865% ( 4412) 00:13:35.541 1.906 - 1.920: 93.2240% ( 970) 00:13:35.541 1.920 - 1.934: 94.4788% ( 205) [2024-11-19 17:31:37.348331] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.541 1.934 - 1.948: 95.0174% ( 88) 00:13:35.541 1.948 - 1.962: 96.7375% ( 281) 00:13:35.541 1.962 - 1.976: 98.5309% ( 293) 00:13:35.541 1.976 - 1.990: 99.2410% ( 116) 00:13:35.541 1.990 - 2.003: 99.3267% ( 14) 00:13:35.541 2.003 - 2.017: 99.3328% ( 1) 00:13:35.541 2.017 - 2.031: 99.3450% ( 2) 00:13:35.541 2.031 - 2.045: 99.3512% ( 1) 00:13:35.541 2.059 - 2.073: 99.3695% ( 3) 00:13:35.541 2.101 - 2.115: 99.3757% ( 1) 00:13:35.541 2.393 - 2.407: 99.3818% ( 1) 00:13:35.541 3.840 - 3.868: 99.3879% ( 1) 00:13:35.541 3.868 - 3.896: 99.4063% ( 3) 00:13:35.541 3.923 - 3.951: 99.4124% ( 1) 00:13:35.541 3.951 - 3.979: 99.4246% ( 2) 00:13:35.541 4.397 - 4.424: 99.4307% ( 1) 00:13:35.541 4.619 - 4.647: 99.4369% ( 1) 00:13:35.541 5.120 - 5.148: 99.4491% ( 2) 00:13:35.541 5.259 - 5.287: 99.4552% ( 1) 00:13:35.541 5.343 - 5.370: 99.4613% ( 1) 00:13:35.541 5.370 - 5.398: 99.4797% ( 3) 00:13:35.541 5.537 - 5.565: 99.4858% ( 1) 00:13:35.541 5.816 - 5.843: 99.4920% ( 1) 00:13:35.541 5.983 - 6.010: 99.4981% ( 1) 00:13:35.541 6.066 - 6.094: 99.5042% ( 1) 00:13:35.541 6.150 - 6.177: 99.5103% ( 1) 00:13:35.541 6.483 - 6.511: 99.5164% ( 1) 00:13:35.541 7.569 - 7.624: 99.5226% ( 1) 00:13:35.541 7.847 - 7.903: 99.5348% ( 2) 00:13:35.541 8.125 - 8.181: 99.5409% ( 1) 00:13:35.541 3989.148 - 
4017.642: 99.9939% ( 74) 00:13:35.541 4160.111 - 4188.605: 100.0000% ( 1) 00:13:35.541 00:13:35.541 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:35.541 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:35.541 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:35.541 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:35.541 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:35.541 [ 00:13:35.541 { 00:13:35.541 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:35.541 "subtype": "Discovery", 00:13:35.541 "listen_addresses": [], 00:13:35.541 "allow_any_host": true, 00:13:35.541 "hosts": [] 00:13:35.541 }, 00:13:35.541 { 00:13:35.541 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:35.541 "subtype": "NVMe", 00:13:35.541 "listen_addresses": [ 00:13:35.541 { 00:13:35.541 "trtype": "VFIOUSER", 00:13:35.541 "adrfam": "IPv4", 00:13:35.541 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:35.541 "trsvcid": "0" 00:13:35.541 } 00:13:35.541 ], 00:13:35.541 "allow_any_host": true, 00:13:35.541 "hosts": [], 00:13:35.541 "serial_number": "SPDK1", 00:13:35.541 "model_number": "SPDK bdev Controller", 00:13:35.541 "max_namespaces": 32, 00:13:35.541 "min_cntlid": 1, 00:13:35.541 "max_cntlid": 65519, 00:13:35.541 "namespaces": [ 00:13:35.541 { 00:13:35.541 "nsid": 1, 00:13:35.541 "bdev_name": "Malloc1", 00:13:35.541 "name": "Malloc1", 00:13:35.541 "nguid": "3B295A2E9FC14F02B9AE0DC19A410B48", 00:13:35.541 "uuid": "3b295a2e-9fc1-4f02-b9ae-0dc19a410b48" 00:13:35.541 } 00:13:35.541 ] 
00:13:35.541 }, 00:13:35.541 { 00:13:35.541 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:35.541 "subtype": "NVMe", 00:13:35.541 "listen_addresses": [ 00:13:35.541 { 00:13:35.541 "trtype": "VFIOUSER", 00:13:35.542 "adrfam": "IPv4", 00:13:35.542 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:35.542 "trsvcid": "0" 00:13:35.542 } 00:13:35.542 ], 00:13:35.542 "allow_any_host": true, 00:13:35.542 "hosts": [], 00:13:35.542 "serial_number": "SPDK2", 00:13:35.542 "model_number": "SPDK bdev Controller", 00:13:35.542 "max_namespaces": 32, 00:13:35.542 "min_cntlid": 1, 00:13:35.542 "max_cntlid": 65519, 00:13:35.542 "namespaces": [ 00:13:35.542 { 00:13:35.542 "nsid": 1, 00:13:35.542 "bdev_name": "Malloc2", 00:13:35.542 "name": "Malloc2", 00:13:35.542 "nguid": "045B37828EBA4F7CBDAD8A1E56FF30AC", 00:13:35.542 "uuid": "045b3782-8eba-4f7c-bdad-8a1e56ff30ac" 00:13:35.542 } 00:13:35.542 ] 00:13:35.542 } 00:13:35.542 ] 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3421249 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:35.542 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:35.801 [2024-11-19 17:31:37.767363] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.801 Malloc3 00:13:35.801 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:35.801 [2024-11-19 17:31:37.994088] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.801 17:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:36.061 Asynchronous Event Request test 00:13:36.061 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.061 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.061 Registering asynchronous event callbacks... 00:13:36.061 Starting namespace attribute notice tests for all controllers... 00:13:36.061 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:36.061 aer_cb - Changed Namespace 00:13:36.061 Cleaning up... 
00:13:36.061 [ 00:13:36.061 { 00:13:36.061 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:36.061 "subtype": "Discovery", 00:13:36.061 "listen_addresses": [], 00:13:36.061 "allow_any_host": true, 00:13:36.061 "hosts": [] 00:13:36.061 }, 00:13:36.061 { 00:13:36.061 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:36.061 "subtype": "NVMe", 00:13:36.061 "listen_addresses": [ 00:13:36.061 { 00:13:36.061 "trtype": "VFIOUSER", 00:13:36.061 "adrfam": "IPv4", 00:13:36.061 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:36.061 "trsvcid": "0" 00:13:36.061 } 00:13:36.061 ], 00:13:36.061 "allow_any_host": true, 00:13:36.061 "hosts": [], 00:13:36.061 "serial_number": "SPDK1", 00:13:36.061 "model_number": "SPDK bdev Controller", 00:13:36.061 "max_namespaces": 32, 00:13:36.061 "min_cntlid": 1, 00:13:36.061 "max_cntlid": 65519, 00:13:36.061 "namespaces": [ 00:13:36.061 { 00:13:36.061 "nsid": 1, 00:13:36.061 "bdev_name": "Malloc1", 00:13:36.061 "name": "Malloc1", 00:13:36.061 "nguid": "3B295A2E9FC14F02B9AE0DC19A410B48", 00:13:36.061 "uuid": "3b295a2e-9fc1-4f02-b9ae-0dc19a410b48" 00:13:36.061 }, 00:13:36.061 { 00:13:36.061 "nsid": 2, 00:13:36.061 "bdev_name": "Malloc3", 00:13:36.061 "name": "Malloc3", 00:13:36.061 "nguid": "242510488E8E4C91A6DD862B772C5E4F", 00:13:36.061 "uuid": "24251048-8e8e-4c91-a6dd-862b772c5e4f" 00:13:36.061 } 00:13:36.061 ] 00:13:36.061 }, 00:13:36.061 { 00:13:36.061 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:36.061 "subtype": "NVMe", 00:13:36.061 "listen_addresses": [ 00:13:36.061 { 00:13:36.061 "trtype": "VFIOUSER", 00:13:36.061 "adrfam": "IPv4", 00:13:36.061 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:36.061 "trsvcid": "0" 00:13:36.061 } 00:13:36.061 ], 00:13:36.061 "allow_any_host": true, 00:13:36.061 "hosts": [], 00:13:36.061 "serial_number": "SPDK2", 00:13:36.061 "model_number": "SPDK bdev Controller", 00:13:36.061 "max_namespaces": 32, 00:13:36.061 "min_cntlid": 1, 00:13:36.061 "max_cntlid": 65519, 00:13:36.061 "namespaces": [ 
00:13:36.061 { 00:13:36.061 "nsid": 1, 00:13:36.061 "bdev_name": "Malloc2", 00:13:36.061 "name": "Malloc2", 00:13:36.061 "nguid": "045B37828EBA4F7CBDAD8A1E56FF30AC", 00:13:36.061 "uuid": "045b3782-8eba-4f7c-bdad-8a1e56ff30ac" 00:13:36.061 } 00:13:36.061 ] 00:13:36.061 } 00:13:36.061 ] 00:13:36.061 17:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3421249 00:13:36.061 17:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:36.061 17:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:36.061 17:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:36.061 17:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:36.061 [2024-11-19 17:31:38.242801] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:13:36.061 [2024-11-19 17:31:38.242845] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421345 ] 00:13:36.322 [2024-11-19 17:31:38.282747] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:36.322 [2024-11-19 17:31:38.291187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:36.322 [2024-11-19 17:31:38.291209] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbdf5983000 00:13:36.322 [2024-11-19 17:31:38.292191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.322 [2024-11-19 17:31:38.293197] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.322 [2024-11-19 17:31:38.294204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.322 [2024-11-19 17:31:38.295209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.322 [2024-11-19 17:31:38.296214] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.322 [2024-11-19 17:31:38.297227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.322 [2024-11-19 17:31:38.298229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.322 
[2024-11-19 17:31:38.299240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.323 [2024-11-19 17:31:38.300247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:36.323 [2024-11-19 17:31:38.300257] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbdf5978000 00:13:36.323 [2024-11-19 17:31:38.301196] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:36.323 [2024-11-19 17:31:38.315400] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:36.323 [2024-11-19 17:31:38.315433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:36.323 [2024-11-19 17:31:38.317503] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:36.323 [2024-11-19 17:31:38.317542] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:36.323 [2024-11-19 17:31:38.317607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:36.323 [2024-11-19 17:31:38.317620] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:36.323 [2024-11-19 17:31:38.317625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:36.323 [2024-11-19 17:31:38.318505] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:36.323 [2024-11-19 17:31:38.318515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:36.323 [2024-11-19 17:31:38.318522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:36.323 [2024-11-19 17:31:38.319517] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:36.323 [2024-11-19 17:31:38.319527] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:36.323 [2024-11-19 17:31:38.319533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:36.323 [2024-11-19 17:31:38.320524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:36.323 [2024-11-19 17:31:38.320532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:36.323 [2024-11-19 17:31:38.321529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:36.323 [2024-11-19 17:31:38.321538] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:36.323 [2024-11-19 17:31:38.321542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:36.323 [2024-11-19 17:31:38.321548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:36.323 [2024-11-19 17:31:38.321656] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:36.323 [2024-11-19 17:31:38.321660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:36.323 [2024-11-19 17:31:38.321665] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:36.323 [2024-11-19 17:31:38.322538] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:36.323 [2024-11-19 17:31:38.323545] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:36.323 [2024-11-19 17:31:38.324555] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:36.323 [2024-11-19 17:31:38.325553] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:36.323 [2024-11-19 17:31:38.325591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:36.323 [2024-11-19 17:31:38.326560] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:36.323 [2024-11-19 17:31:38.326569] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:36.323 [2024-11-19 17:31:38.326573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.326590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:36.323 [2024-11-19 17:31:38.326598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.326609] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.323 [2024-11-19 17:31:38.326614] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.323 [2024-11-19 17:31:38.326617] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.323 [2024-11-19 17:31:38.326629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.323 [2024-11-19 17:31:38.336955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:36.323 [2024-11-19 17:31:38.336967] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:36.323 [2024-11-19 17:31:38.336971] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:36.323 [2024-11-19 17:31:38.336975] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:36.323 [2024-11-19 17:31:38.336982] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:36.323 [2024-11-19 17:31:38.336989] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:36.323 [2024-11-19 17:31:38.336994] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:36.323 [2024-11-19 17:31:38.336998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.337006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.337016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:36.323 [2024-11-19 17:31:38.344953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:36.323 [2024-11-19 17:31:38.344964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.323 [2024-11-19 17:31:38.344972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.323 [2024-11-19 17:31:38.344979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.323 [2024-11-19 17:31:38.344987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.323 [2024-11-19 17:31:38.344992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.344997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.345006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:36.323 [2024-11-19 17:31:38.352960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:36.323 [2024-11-19 17:31:38.352970] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:36.323 [2024-11-19 17:31:38.352975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.352981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.352987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.352995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:36.323 [2024-11-19 17:31:38.360953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:36.323 [2024-11-19 17:31:38.361008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:36.323 [2024-11-19 17:31:38.361016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:36.323 
[2024-11-19 17:31:38.361022] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:36.323 [2024-11-19 17:31:38.361027] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:36.323 [2024-11-19 17:31:38.361032] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.323 [2024-11-19 17:31:38.361037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:36.323 [2024-11-19 17:31:38.368952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.368964] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:36.324 [2024-11-19 17:31:38.368972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.368980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.368986] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.324 [2024-11-19 17:31:38.368990] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.324 [2024-11-19 17:31:38.368993] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.324 [2024-11-19 17:31:38.368998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.376953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.376967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.376975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.376982] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.324 [2024-11-19 17:31:38.376986] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.324 [2024-11-19 17:31:38.376989] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.324 [2024-11-19 17:31:38.376995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.384951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.384960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.384966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.384974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.384980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.384984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.384989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.384993] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:36.324 [2024-11-19 17:31:38.384999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:36.324 [2024-11-19 17:31:38.385004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:36.324 [2024-11-19 17:31:38.385020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.392953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.392966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.400953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.400964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.408952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 
17:31:38.408964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.416953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.416967] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:36.324 [2024-11-19 17:31:38.416972] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:36.324 [2024-11-19 17:31:38.416975] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:36.324 [2024-11-19 17:31:38.416978] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:36.324 [2024-11-19 17:31:38.416981] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:36.324 [2024-11-19 17:31:38.416987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:36.324 [2024-11-19 17:31:38.416994] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:36.324 [2024-11-19 17:31:38.416998] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:36.324 [2024-11-19 17:31:38.417001] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.324 [2024-11-19 17:31:38.417006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.417012] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:36.324 [2024-11-19 17:31:38.417016] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.324 [2024-11-19 17:31:38.417019] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.324 [2024-11-19 17:31:38.417025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.417032] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:36.324 [2024-11-19 17:31:38.417036] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:36.324 [2024-11-19 17:31:38.417039] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.324 [2024-11-19 17:31:38.417044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:36.324 [2024-11-19 17:31:38.424951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.424966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.424975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:36.324 [2024-11-19 17:31:38.424982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:36.324 ===================================================== 00:13:36.324 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:36.324 ===================================================== 00:13:36.324 Controller Capabilities/Features 00:13:36.324 
================================ 00:13:36.324 Vendor ID: 4e58 00:13:36.324 Subsystem Vendor ID: 4e58 00:13:36.324 Serial Number: SPDK2 00:13:36.324 Model Number: SPDK bdev Controller 00:13:36.324 Firmware Version: 25.01 00:13:36.324 Recommended Arb Burst: 6 00:13:36.324 IEEE OUI Identifier: 8d 6b 50 00:13:36.324 Multi-path I/O 00:13:36.324 May have multiple subsystem ports: Yes 00:13:36.324 May have multiple controllers: Yes 00:13:36.324 Associated with SR-IOV VF: No 00:13:36.324 Max Data Transfer Size: 131072 00:13:36.324 Max Number of Namespaces: 32 00:13:36.324 Max Number of I/O Queues: 127 00:13:36.324 NVMe Specification Version (VS): 1.3 00:13:36.324 NVMe Specification Version (Identify): 1.3 00:13:36.324 Maximum Queue Entries: 256 00:13:36.324 Contiguous Queues Required: Yes 00:13:36.324 Arbitration Mechanisms Supported 00:13:36.324 Weighted Round Robin: Not Supported 00:13:36.324 Vendor Specific: Not Supported 00:13:36.324 Reset Timeout: 15000 ms 00:13:36.324 Doorbell Stride: 4 bytes 00:13:36.324 NVM Subsystem Reset: Not Supported 00:13:36.324 Command Sets Supported 00:13:36.324 NVM Command Set: Supported 00:13:36.324 Boot Partition: Not Supported 00:13:36.324 Memory Page Size Minimum: 4096 bytes 00:13:36.324 Memory Page Size Maximum: 4096 bytes 00:13:36.324 Persistent Memory Region: Not Supported 00:13:36.324 Optional Asynchronous Events Supported 00:13:36.324 Namespace Attribute Notices: Supported 00:13:36.324 Firmware Activation Notices: Not Supported 00:13:36.324 ANA Change Notices: Not Supported 00:13:36.324 PLE Aggregate Log Change Notices: Not Supported 00:13:36.324 LBA Status Info Alert Notices: Not Supported 00:13:36.324 EGE Aggregate Log Change Notices: Not Supported 00:13:36.324 Normal NVM Subsystem Shutdown event: Not Supported 00:13:36.324 Zone Descriptor Change Notices: Not Supported 00:13:36.324 Discovery Log Change Notices: Not Supported 00:13:36.324 Controller Attributes 00:13:36.324 128-bit Host Identifier: Supported 00:13:36.324 
Non-Operational Permissive Mode: Not Supported 00:13:36.324 NVM Sets: Not Supported 00:13:36.324 Read Recovery Levels: Not Supported 00:13:36.324 Endurance Groups: Not Supported 00:13:36.324 Predictable Latency Mode: Not Supported 00:13:36.324 Traffic Based Keep ALive: Not Supported 00:13:36.324 Namespace Granularity: Not Supported 00:13:36.324 SQ Associations: Not Supported 00:13:36.324 UUID List: Not Supported 00:13:36.324 Multi-Domain Subsystem: Not Supported 00:13:36.324 Fixed Capacity Management: Not Supported 00:13:36.324 Variable Capacity Management: Not Supported 00:13:36.324 Delete Endurance Group: Not Supported 00:13:36.324 Delete NVM Set: Not Supported 00:13:36.324 Extended LBA Formats Supported: Not Supported 00:13:36.324 Flexible Data Placement Supported: Not Supported 00:13:36.325 00:13:36.325 Controller Memory Buffer Support 00:13:36.325 ================================ 00:13:36.325 Supported: No 00:13:36.325 00:13:36.325 Persistent Memory Region Support 00:13:36.325 ================================ 00:13:36.325 Supported: No 00:13:36.325 00:13:36.325 Admin Command Set Attributes 00:13:36.325 ============================ 00:13:36.325 Security Send/Receive: Not Supported 00:13:36.325 Format NVM: Not Supported 00:13:36.325 Firmware Activate/Download: Not Supported 00:13:36.325 Namespace Management: Not Supported 00:13:36.325 Device Self-Test: Not Supported 00:13:36.325 Directives: Not Supported 00:13:36.325 NVMe-MI: Not Supported 00:13:36.325 Virtualization Management: Not Supported 00:13:36.325 Doorbell Buffer Config: Not Supported 00:13:36.325 Get LBA Status Capability: Not Supported 00:13:36.325 Command & Feature Lockdown Capability: Not Supported 00:13:36.325 Abort Command Limit: 4 00:13:36.325 Async Event Request Limit: 4 00:13:36.325 Number of Firmware Slots: N/A 00:13:36.325 Firmware Slot 1 Read-Only: N/A 00:13:36.325 Firmware Activation Without Reset: N/A 00:13:36.325 Multiple Update Detection Support: N/A 00:13:36.325 Firmware Update 
Granularity: No Information Provided 00:13:36.325 Per-Namespace SMART Log: No 00:13:36.325 Asymmetric Namespace Access Log Page: Not Supported 00:13:36.325 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:36.325 Command Effects Log Page: Supported 00:13:36.325 Get Log Page Extended Data: Supported 00:13:36.325 Telemetry Log Pages: Not Supported 00:13:36.325 Persistent Event Log Pages: Not Supported 00:13:36.325 Supported Log Pages Log Page: May Support 00:13:36.325 Commands Supported & Effects Log Page: Not Supported 00:13:36.325 Feature Identifiers & Effects Log Page:May Support 00:13:36.325 NVMe-MI Commands & Effects Log Page: May Support 00:13:36.325 Data Area 4 for Telemetry Log: Not Supported 00:13:36.325 Error Log Page Entries Supported: 128 00:13:36.325 Keep Alive: Supported 00:13:36.325 Keep Alive Granularity: 10000 ms 00:13:36.325 00:13:36.325 NVM Command Set Attributes 00:13:36.325 ========================== 00:13:36.325 Submission Queue Entry Size 00:13:36.325 Max: 64 00:13:36.325 Min: 64 00:13:36.325 Completion Queue Entry Size 00:13:36.325 Max: 16 00:13:36.325 Min: 16 00:13:36.325 Number of Namespaces: 32 00:13:36.325 Compare Command: Supported 00:13:36.325 Write Uncorrectable Command: Not Supported 00:13:36.325 Dataset Management Command: Supported 00:13:36.325 Write Zeroes Command: Supported 00:13:36.325 Set Features Save Field: Not Supported 00:13:36.325 Reservations: Not Supported 00:13:36.325 Timestamp: Not Supported 00:13:36.325 Copy: Supported 00:13:36.325 Volatile Write Cache: Present 00:13:36.325 Atomic Write Unit (Normal): 1 00:13:36.325 Atomic Write Unit (PFail): 1 00:13:36.325 Atomic Compare & Write Unit: 1 00:13:36.325 Fused Compare & Write: Supported 00:13:36.325 Scatter-Gather List 00:13:36.325 SGL Command Set: Supported (Dword aligned) 00:13:36.325 SGL Keyed: Not Supported 00:13:36.325 SGL Bit Bucket Descriptor: Not Supported 00:13:36.325 SGL Metadata Pointer: Not Supported 00:13:36.325 Oversized SGL: Not Supported 00:13:36.325 SGL 
Metadata Address: Not Supported 00:13:36.325 SGL Offset: Not Supported 00:13:36.325 Transport SGL Data Block: Not Supported 00:13:36.325 Replay Protected Memory Block: Not Supported 00:13:36.325 00:13:36.325 Firmware Slot Information 00:13:36.325 ========================= 00:13:36.325 Active slot: 1 00:13:36.325 Slot 1 Firmware Revision: 25.01 00:13:36.325 00:13:36.325 00:13:36.325 Commands Supported and Effects 00:13:36.325 ============================== 00:13:36.325 Admin Commands 00:13:36.325 -------------- 00:13:36.325 Get Log Page (02h): Supported 00:13:36.325 Identify (06h): Supported 00:13:36.325 Abort (08h): Supported 00:13:36.325 Set Features (09h): Supported 00:13:36.325 Get Features (0Ah): Supported 00:13:36.325 Asynchronous Event Request (0Ch): Supported 00:13:36.325 Keep Alive (18h): Supported 00:13:36.325 I/O Commands 00:13:36.325 ------------ 00:13:36.325 Flush (00h): Supported LBA-Change 00:13:36.325 Write (01h): Supported LBA-Change 00:13:36.325 Read (02h): Supported 00:13:36.325 Compare (05h): Supported 00:13:36.325 Write Zeroes (08h): Supported LBA-Change 00:13:36.325 Dataset Management (09h): Supported LBA-Change 00:13:36.325 Copy (19h): Supported LBA-Change 00:13:36.325 00:13:36.325 Error Log 00:13:36.325 ========= 00:13:36.325 00:13:36.325 Arbitration 00:13:36.325 =========== 00:13:36.325 Arbitration Burst: 1 00:13:36.325 00:13:36.325 Power Management 00:13:36.325 ================ 00:13:36.325 Number of Power States: 1 00:13:36.325 Current Power State: Power State #0 00:13:36.325 Power State #0: 00:13:36.325 Max Power: 0.00 W 00:13:36.325 Non-Operational State: Operational 00:13:36.325 Entry Latency: Not Reported 00:13:36.325 Exit Latency: Not Reported 00:13:36.325 Relative Read Throughput: 0 00:13:36.325 Relative Read Latency: 0 00:13:36.325 Relative Write Throughput: 0 00:13:36.325 Relative Write Latency: 0 00:13:36.325 Idle Power: Not Reported 00:13:36.325 Active Power: Not Reported 00:13:36.325 Non-Operational Permissive Mode: Not 
Supported 00:13:36.325 00:13:36.325 Health Information 00:13:36.325 ================== 00:13:36.325 Critical Warnings: 00:13:36.325 Available Spare Space: OK 00:13:36.325 Temperature: OK 00:13:36.325 Device Reliability: OK 00:13:36.325 Read Only: No 00:13:36.325 Volatile Memory Backup: OK 00:13:36.325 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:36.325 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:36.325 Available Spare: 0% 00:13:36.325 Available Sp[2024-11-19 17:31:38.425074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:36.325 [2024-11-19 17:31:38.432953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:36.325 [2024-11-19 17:31:38.432983] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:36.325 [2024-11-19 17:31:38.432991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.325 [2024-11-19 17:31:38.432997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.325 [2024-11-19 17:31:38.433003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.325 [2024-11-19 17:31:38.433008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.325 [2024-11-19 17:31:38.433049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:36.325 [2024-11-19 17:31:38.433059] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:36.325 
[2024-11-19 17:31:38.434056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:36.325 [2024-11-19 17:31:38.434099] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:36.325 [2024-11-19 17:31:38.434106] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:36.325 [2024-11-19 17:31:38.435062] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:36.325 [2024-11-19 17:31:38.435074] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:36.325 [2024-11-19 17:31:38.435119] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:36.325 [2024-11-19 17:31:38.436098] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:36.325 are Threshold: 0% 00:13:36.325 Life Percentage Used: 0% 00:13:36.325 Data Units Read: 0 00:13:36.325 Data Units Written: 0 00:13:36.325 Host Read Commands: 0 00:13:36.325 Host Write Commands: 0 00:13:36.325 Controller Busy Time: 0 minutes 00:13:36.325 Power Cycles: 0 00:13:36.325 Power On Hours: 0 hours 00:13:36.325 Unsafe Shutdowns: 0 00:13:36.325 Unrecoverable Media Errors: 0 00:13:36.325 Lifetime Error Log Entries: 0 00:13:36.325 Warning Temperature Time: 0 minutes 00:13:36.325 Critical Temperature Time: 0 minutes 00:13:36.325 00:13:36.325 Number of Queues 00:13:36.325 ================ 00:13:36.325 Number of I/O Submission Queues: 127 00:13:36.325 Number of I/O Completion Queues: 127 00:13:36.325 00:13:36.325 Active Namespaces 00:13:36.325 ================= 00:13:36.325 Namespace ID:1 00:13:36.325 Error Recovery Timeout: Unlimited 
00:13:36.325 Command Set Identifier: NVM (00h) 00:13:36.325 Deallocate: Supported 00:13:36.325 Deallocated/Unwritten Error: Not Supported 00:13:36.325 Deallocated Read Value: Unknown 00:13:36.325 Deallocate in Write Zeroes: Not Supported 00:13:36.326 Deallocated Guard Field: 0xFFFF 00:13:36.326 Flush: Supported 00:13:36.326 Reservation: Supported 00:13:36.326 Namespace Sharing Capabilities: Multiple Controllers 00:13:36.326 Size (in LBAs): 131072 (0GiB) 00:13:36.326 Capacity (in LBAs): 131072 (0GiB) 00:13:36.326 Utilization (in LBAs): 131072 (0GiB) 00:13:36.326 NGUID: 045B37828EBA4F7CBDAD8A1E56FF30AC 00:13:36.326 UUID: 045b3782-8eba-4f7c-bdad-8a1e56ff30ac 00:13:36.326 Thin Provisioning: Not Supported 00:13:36.326 Per-NS Atomic Units: Yes 00:13:36.326 Atomic Boundary Size (Normal): 0 00:13:36.326 Atomic Boundary Size (PFail): 0 00:13:36.326 Atomic Boundary Offset: 0 00:13:36.326 Maximum Single Source Range Length: 65535 00:13:36.326 Maximum Copy Length: 65535 00:13:36.326 Maximum Source Range Count: 1 00:13:36.326 NGUID/EUI64 Never Reused: No 00:13:36.326 Namespace Write Protected: No 00:13:36.326 Number of LBA Formats: 1 00:13:36.326 Current LBA Format: LBA Format #00 00:13:36.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:36.326 00:13:36.326 17:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:36.585 [2024-11-19 17:31:38.663350] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:41.857 Initializing NVMe Controllers 00:13:41.857 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:41.857 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:41.857 Initialization complete. Launching workers. 00:13:41.857 ======================================================== 00:13:41.857 Latency(us) 00:13:41.857 Device Information : IOPS MiB/s Average min max 00:13:41.857 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39905.96 155.88 3207.36 955.79 10606.87 00:13:41.857 ======================================================== 00:13:41.857 Total : 39905.96 155.88 3207.36 955.79 10606.87 00:13:41.857 00:13:41.857 [2024-11-19 17:31:43.764215] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:41.857 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:41.857 [2024-11-19 17:31:44.002942] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:47.163 Initializing NVMe Controllers 00:13:47.163 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:47.163 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:47.163 Initialization complete. Launching workers. 
00:13:47.163 ======================================================== 00:13:47.163 Latency(us) 00:13:47.163 Device Information : IOPS MiB/s Average min max 00:13:47.163 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39840.56 155.63 3212.65 1045.71 7533.83 00:13:47.163 ======================================================== 00:13:47.163 Total : 39840.56 155.63 3212.65 1045.71 7533.83 00:13:47.163 00:13:47.163 [2024-11-19 17:31:49.026630] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:47.163 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:47.163 [2024-11-19 17:31:49.229022] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:52.505 [2024-11-19 17:31:54.374040] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:52.505 Initializing NVMe Controllers 00:13:52.505 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.505 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.505 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:52.505 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:52.505 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:52.505 Initialization complete. Launching workers. 
00:13:52.505 Starting thread on core 2 00:13:52.505 Starting thread on core 3 00:13:52.505 Starting thread on core 1 00:13:52.506 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:52.506 [2024-11-19 17:31:54.673408] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.796 [2024-11-19 17:31:57.742183] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.796 Initializing NVMe Controllers 00:13:55.796 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.796 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:55.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:55.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:55.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:55.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:55.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:55.796 Initialization complete. Launching workers. 
00:13:55.796 Starting thread on core 1 with urgent priority queue 00:13:55.796 Starting thread on core 2 with urgent priority queue 00:13:55.796 Starting thread on core 3 with urgent priority queue 00:13:55.796 Starting thread on core 0 with urgent priority queue 00:13:55.796 SPDK bdev Controller (SPDK2 ) core 0: 4205.67 IO/s 23.78 secs/100000 ios 00:13:55.796 SPDK bdev Controller (SPDK2 ) core 1: 4262.67 IO/s 23.46 secs/100000 ios 00:13:55.796 SPDK bdev Controller (SPDK2 ) core 2: 4648.33 IO/s 21.51 secs/100000 ios 00:13:55.796 SPDK bdev Controller (SPDK2 ) core 3: 3497.00 IO/s 28.60 secs/100000 ios 00:13:55.796 ======================================================== 00:13:55.796 00:13:55.796 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:56.055 [2024-11-19 17:31:58.035396] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.055 Initializing NVMe Controllers 00:13:56.055 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.055 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.055 Namespace ID: 1 size: 0GB 00:13:56.055 Initialization complete. 00:13:56.055 INFO: using host memory buffer for IO 00:13:56.055 Hello world! 
00:13:56.055 [2024-11-19 17:31:58.044452] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.055 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:56.314 [2024-11-19 17:31:58.332840] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:57.251 Initializing NVMe Controllers 00:13:57.251 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:57.251 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:57.251 Initialization complete. Launching workers. 00:13:57.251 submit (in ns) avg, min, max = 7223.6, 3213.9, 4000980.0 00:13:57.251 complete (in ns) avg, min, max = 21202.0, 1764.3, 4000435.7 00:13:57.251 00:13:57.251 Submit histogram 00:13:57.251 ================ 00:13:57.251 Range in us Cumulative Count 00:13:57.251 3.214 - 3.228: 0.0062% ( 1) 00:13:57.251 3.256 - 3.270: 0.0248% ( 3) 00:13:57.251 3.270 - 3.283: 0.0993% ( 12) 00:13:57.251 3.283 - 3.297: 0.2173% ( 19) 00:13:57.251 3.297 - 3.311: 0.4035% ( 30) 00:13:57.251 3.311 - 3.325: 0.7138% ( 50) 00:13:57.251 3.325 - 3.339: 1.3097% ( 96) 00:13:57.251 3.339 - 3.353: 3.4947% ( 352) 00:13:57.251 3.353 - 3.367: 7.7902% ( 692) 00:13:57.251 3.367 - 3.381: 13.6809% ( 949) 00:13:57.251 3.381 - 3.395: 19.5965% ( 953) 00:13:57.251 3.395 - 3.409: 25.8908% ( 1014) 00:13:57.251 3.409 - 3.423: 31.8746% ( 964) 00:13:57.251 3.423 - 3.437: 36.7225% ( 781) 00:13:57.251 3.437 - 3.450: 42.2408% ( 889) 00:13:57.251 3.450 - 3.464: 47.4364% ( 837) 00:13:57.251 3.464 - 3.478: 51.5084% ( 656) 00:13:57.251 3.478 - 3.492: 55.4500% ( 635) 00:13:57.251 3.492 - 3.506: 61.2228% ( 930) 00:13:57.251 3.506 - 3.520: 67.5357% ( 1017) 00:13:57.251 3.520 - 3.534: 71.6263% ( 659) 
00:13:57.251 3.534 - 3.548: 76.0956% ( 720) 00:13:57.251 3.548 - 3.562: 80.7200% ( 745) 00:13:57.251 3.562 - 3.590: 85.9901% ( 849) 00:13:57.251 3.590 - 3.617: 87.3557% ( 220) 00:13:57.251 3.617 - 3.645: 88.1192% ( 123) 00:13:57.251 3.645 - 3.673: 89.4289% ( 211) 00:13:57.251 3.673 - 3.701: 91.1484% ( 277) 00:13:57.251 3.701 - 3.729: 92.8367% ( 272) 00:13:57.251 3.729 - 3.757: 94.5065% ( 269) 00:13:57.251 3.757 - 3.784: 96.2942% ( 288) 00:13:57.251 3.784 - 3.812: 97.6164% ( 213) 00:13:57.251 3.812 - 3.840: 98.5289% ( 147) 00:13:57.251 3.840 - 3.868: 99.1186% ( 95) 00:13:57.251 3.868 - 3.896: 99.4165% ( 48) 00:13:57.251 3.896 - 3.923: 99.5717% ( 25) 00:13:57.251 3.923 - 3.951: 99.6151% ( 7) 00:13:57.251 3.951 - 3.979: 99.6400% ( 4) 00:13:57.251 3.979 - 4.007: 99.6462% ( 1) 00:13:57.251 5.064 - 5.092: 99.6524% ( 1) 00:13:57.251 5.231 - 5.259: 99.6586% ( 1) 00:13:57.251 5.315 - 5.343: 99.6648% ( 1) 00:13:57.251 5.398 - 5.426: 99.6710% ( 1) 00:13:57.251 5.426 - 5.454: 99.6772% ( 1) 00:13:57.251 5.454 - 5.482: 99.6834% ( 1) 00:13:57.251 5.537 - 5.565: 99.6896% ( 1) 00:13:57.251 5.621 - 5.649: 99.7083% ( 3) 00:13:57.251 5.677 - 5.704: 99.7207% ( 2) 00:13:57.251 6.233 - 6.261: 99.7269% ( 1) 00:13:57.251 6.372 - 6.400: 99.7331% ( 1) 00:13:57.251 6.511 - 6.539: 99.7393% ( 1) 00:13:57.251 6.595 - 6.623: 99.7455% ( 1) 00:13:57.251 6.706 - 6.734: 99.7517% ( 1) 00:13:57.251 6.762 - 6.790: 99.7579% ( 1) 00:13:57.251 6.957 - 6.984: 99.7641% ( 1) 00:13:57.251 6.984 - 7.012: 99.7703% ( 1) 00:13:57.251 7.179 - 7.235: 99.7765% ( 1) 00:13:57.251 7.235 - 7.290: 99.7827% ( 1) 00:13:57.251 7.513 - 7.569: 99.7890% ( 1) 00:13:57.251 7.569 - 7.624: 99.7952% ( 1) 00:13:57.251 7.624 - 7.680: 99.8014% ( 1) 00:13:57.251 7.680 - 7.736: 99.8138% ( 2) 00:13:57.251 7.791 - 7.847: 99.8262% ( 2) 00:13:57.251 7.958 - 8.014: 99.8324% ( 1) 00:13:57.251 8.292 - 8.348: 99.8386% ( 1) 00:13:57.251 8.515 - 8.570: 99.8448% ( 1) 00:13:57.251 8.570 - 8.626: 99.8510% ( 1) 00:13:57.251 8.626 - 8.682: 99.8572% ( 
1) 00:13:57.251 8.793 - 8.849: 99.8696% ( 2) 00:13:57.252 9.238 - 9.294: 99.8759% ( 1) 00:13:57.252 9.405 - 9.461: 99.8821% ( 1) 00:13:57.252 10.685 - 10.741: 99.8883% ( 1) 00:13:57.252 13.579 - 13.635: 99.8945% ( 1) 00:13:57.252 14.024 - 14.080: 99.9007% ( 1) 00:13:57.252 19.033 - 19.144: 99.9069% ( 1) 00:13:57.252 3989.148 - 4017.642: 100.0000% ( 15) 00:13:57.252 00:13:57.252 Complete histogram 00:13:57.252 ================== 00:13:57.252 Range in us Cumulative Count 00:13:57.252 1.760 - 1.767: 0.0186% ( 3) 00:13:57.252 1.767 - [2024-11-19 17:31:59.427113] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:57.252 1.774: 0.0807% ( 10) 00:13:57.252 1.774 - 1.781: 0.1738% ( 15) 00:13:57.252 1.781 - 1.795: 0.2607% ( 14) 00:13:57.252 1.795 - 1.809: 0.3042% ( 7) 00:13:57.252 1.809 - 1.823: 3.3892% ( 497) 00:13:57.252 1.823 - 1.837: 16.5239% ( 2116) 00:13:57.252 1.837 - 1.850: 20.2607% ( 602) 00:13:57.252 1.850 - 1.864: 21.8374% ( 254) 00:13:57.252 1.864 - 1.878: 29.4972% ( 1234) 00:13:57.252 1.878 - 1.892: 73.6685% ( 7116) 00:13:57.252 1.892 - 1.906: 91.3966% ( 2856) 00:13:57.252 1.906 - 1.920: 95.7666% ( 704) 00:13:57.252 1.920 - 1.934: 96.9584% ( 192) 00:13:57.252 1.934 - 1.948: 97.5109% ( 89) 00:13:57.252 1.948 - 1.962: 98.3178% ( 130) 00:13:57.252 1.962 - 1.976: 98.9820% ( 107) 00:13:57.252 1.976 - 1.990: 99.2117% ( 37) 00:13:57.252 1.990 - 2.003: 99.2551% ( 7) 00:13:57.252 2.003 - 2.017: 99.2737% ( 3) 00:13:57.252 2.017 - 2.031: 99.2924% ( 3) 00:13:57.252 2.059 - 2.073: 99.3048% ( 2) 00:13:57.252 2.073 - 2.087: 99.3110% ( 1) 00:13:57.252 3.840 - 3.868: 99.3172% ( 1) 00:13:57.252 3.896 - 3.923: 99.3234% ( 1) 00:13:57.252 3.923 - 3.951: 99.3296% ( 1) 00:13:57.252 3.951 - 3.979: 99.3420% ( 2) 00:13:57.252 3.979 - 4.007: 99.3482% ( 1) 00:13:57.252 4.202 - 4.230: 99.3669% ( 3) 00:13:57.252 4.397 - 4.424: 99.3731% ( 1) 00:13:57.252 4.480 - 4.508: 99.3793% ( 1) 00:13:57.252 4.563 - 4.591: 99.3855% ( 1) 
00:13:57.252 4.591 - 4.619: 99.3917% ( 1) 00:13:57.252 4.647 - 4.675: 99.3979% ( 1) 00:13:57.252 4.786 - 4.814: 99.4041% ( 1) 00:13:57.252 4.842 - 4.870: 99.4103% ( 1) 00:13:57.252 4.925 - 4.953: 99.4165% ( 1) 00:13:57.252 4.981 - 5.009: 99.4227% ( 1) 00:13:57.252 5.176 - 5.203: 99.4289% ( 1) 00:13:57.252 5.343 - 5.370: 99.4351% ( 1) 00:13:57.252 5.621 - 5.649: 99.4413% ( 1) 00:13:57.252 6.010 - 6.038: 99.4475% ( 1) 00:13:57.252 6.150 - 6.177: 99.4538% ( 1) 00:13:57.252 6.233 - 6.261: 99.4600% ( 1) 00:13:57.252 6.456 - 6.483: 99.4662% ( 1) 00:13:57.252 6.650 - 6.678: 99.4724% ( 1) 00:13:57.252 6.762 - 6.790: 99.4786% ( 1) 00:13:57.252 6.901 - 6.929: 99.4848% ( 1) 00:13:57.252 6.957 - 6.984: 99.4910% ( 1) 00:13:57.252 7.513 - 7.569: 99.4972% ( 1) 00:13:57.252 8.292 - 8.348: 99.5034% ( 1) 00:13:57.252 9.350 - 9.405: 99.5096% ( 1) 00:13:57.252 50.310 - 50.532: 99.5158% ( 1) 00:13:57.252 3533.245 - 3547.492: 99.5220% ( 1) 00:13:57.252 3989.148 - 4017.642: 100.0000% ( 77) 00:13:57.252 00:13:57.252 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:57.252 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:57.252 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:57.252 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:57.252 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:57.511 [ 00:13:57.511 { 00:13:57.511 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:57.511 "subtype": "Discovery", 00:13:57.511 "listen_addresses": [], 00:13:57.511 "allow_any_host": true, 00:13:57.511 
"hosts": [] 00:13:57.511 }, 00:13:57.511 { 00:13:57.511 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:57.511 "subtype": "NVMe", 00:13:57.511 "listen_addresses": [ 00:13:57.511 { 00:13:57.511 "trtype": "VFIOUSER", 00:13:57.511 "adrfam": "IPv4", 00:13:57.511 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:57.511 "trsvcid": "0" 00:13:57.511 } 00:13:57.511 ], 00:13:57.511 "allow_any_host": true, 00:13:57.511 "hosts": [], 00:13:57.511 "serial_number": "SPDK1", 00:13:57.511 "model_number": "SPDK bdev Controller", 00:13:57.511 "max_namespaces": 32, 00:13:57.511 "min_cntlid": 1, 00:13:57.511 "max_cntlid": 65519, 00:13:57.511 "namespaces": [ 00:13:57.511 { 00:13:57.511 "nsid": 1, 00:13:57.511 "bdev_name": "Malloc1", 00:13:57.511 "name": "Malloc1", 00:13:57.511 "nguid": "3B295A2E9FC14F02B9AE0DC19A410B48", 00:13:57.511 "uuid": "3b295a2e-9fc1-4f02-b9ae-0dc19a410b48" 00:13:57.511 }, 00:13:57.511 { 00:13:57.511 "nsid": 2, 00:13:57.511 "bdev_name": "Malloc3", 00:13:57.511 "name": "Malloc3", 00:13:57.511 "nguid": "242510488E8E4C91A6DD862B772C5E4F", 00:13:57.511 "uuid": "24251048-8e8e-4c91-a6dd-862b772c5e4f" 00:13:57.511 } 00:13:57.511 ] 00:13:57.511 }, 00:13:57.511 { 00:13:57.511 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:57.511 "subtype": "NVMe", 00:13:57.511 "listen_addresses": [ 00:13:57.511 { 00:13:57.511 "trtype": "VFIOUSER", 00:13:57.511 "adrfam": "IPv4", 00:13:57.511 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:57.511 "trsvcid": "0" 00:13:57.511 } 00:13:57.511 ], 00:13:57.511 "allow_any_host": true, 00:13:57.511 "hosts": [], 00:13:57.511 "serial_number": "SPDK2", 00:13:57.511 "model_number": "SPDK bdev Controller", 00:13:57.511 "max_namespaces": 32, 00:13:57.511 "min_cntlid": 1, 00:13:57.511 "max_cntlid": 65519, 00:13:57.511 "namespaces": [ 00:13:57.511 { 00:13:57.511 "nsid": 1, 00:13:57.511 "bdev_name": "Malloc2", 00:13:57.511 "name": "Malloc2", 00:13:57.511 "nguid": "045B37828EBA4F7CBDAD8A1E56FF30AC", 00:13:57.511 "uuid": 
"045b3782-8eba-4f7c-bdad-8a1e56ff30ac" 00:13:57.511 } 00:13:57.511 ] 00:13:57.511 } 00:13:57.511 ] 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3424818 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:57.511 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:57.769 [2024-11-19 17:31:59.825590] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:57.769 Malloc4 00:13:57.769 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:58.028 [2024-11-19 17:32:00.062403] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:58.028 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:58.028 Asynchronous Event Request test 00:13:58.028 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.028 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.028 Registering asynchronous event callbacks... 00:13:58.028 Starting namespace attribute notice tests for all controllers... 00:13:58.028 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:58.028 aer_cb - Changed Namespace 00:13:58.028 Cleaning up... 
00:13:58.287 [ 00:13:58.287 { 00:13:58.287 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:58.287 "subtype": "Discovery", 00:13:58.287 "listen_addresses": [], 00:13:58.287 "allow_any_host": true, 00:13:58.287 "hosts": [] 00:13:58.287 }, 00:13:58.287 { 00:13:58.287 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:58.287 "subtype": "NVMe", 00:13:58.287 "listen_addresses": [ 00:13:58.287 { 00:13:58.287 "trtype": "VFIOUSER", 00:13:58.287 "adrfam": "IPv4", 00:13:58.287 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:58.287 "trsvcid": "0" 00:13:58.287 } 00:13:58.287 ], 00:13:58.287 "allow_any_host": true, 00:13:58.287 "hosts": [], 00:13:58.287 "serial_number": "SPDK1", 00:13:58.287 "model_number": "SPDK bdev Controller", 00:13:58.287 "max_namespaces": 32, 00:13:58.287 "min_cntlid": 1, 00:13:58.287 "max_cntlid": 65519, 00:13:58.287 "namespaces": [ 00:13:58.287 { 00:13:58.287 "nsid": 1, 00:13:58.287 "bdev_name": "Malloc1", 00:13:58.287 "name": "Malloc1", 00:13:58.287 "nguid": "3B295A2E9FC14F02B9AE0DC19A410B48", 00:13:58.287 "uuid": "3b295a2e-9fc1-4f02-b9ae-0dc19a410b48" 00:13:58.287 }, 00:13:58.287 { 00:13:58.287 "nsid": 2, 00:13:58.287 "bdev_name": "Malloc3", 00:13:58.287 "name": "Malloc3", 00:13:58.287 "nguid": "242510488E8E4C91A6DD862B772C5E4F", 00:13:58.287 "uuid": "24251048-8e8e-4c91-a6dd-862b772c5e4f" 00:13:58.287 } 00:13:58.287 ] 00:13:58.287 }, 00:13:58.287 { 00:13:58.287 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:58.287 "subtype": "NVMe", 00:13:58.287 "listen_addresses": [ 00:13:58.287 { 00:13:58.287 "trtype": "VFIOUSER", 00:13:58.287 "adrfam": "IPv4", 00:13:58.287 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:58.287 "trsvcid": "0" 00:13:58.287 } 00:13:58.287 ], 00:13:58.287 "allow_any_host": true, 00:13:58.287 "hosts": [], 00:13:58.287 "serial_number": "SPDK2", 00:13:58.288 "model_number": "SPDK bdev Controller", 00:13:58.288 "max_namespaces": 32, 00:13:58.288 "min_cntlid": 1, 00:13:58.288 "max_cntlid": 65519, 00:13:58.288 "namespaces": [ 
00:13:58.288 { 00:13:58.288 "nsid": 1, 00:13:58.288 "bdev_name": "Malloc2", 00:13:58.288 "name": "Malloc2", 00:13:58.288 "nguid": "045B37828EBA4F7CBDAD8A1E56FF30AC", 00:13:58.288 "uuid": "045b3782-8eba-4f7c-bdad-8a1e56ff30ac" 00:13:58.288 }, 00:13:58.288 { 00:13:58.288 "nsid": 2, 00:13:58.288 "bdev_name": "Malloc4", 00:13:58.288 "name": "Malloc4", 00:13:58.288 "nguid": "9A6C227A3D734516BF018730B554E3A0", 00:13:58.288 "uuid": "9a6c227a-3d73-4516-bf01-8730b554e3a0" 00:13:58.288 } 00:13:58.288 ] 00:13:58.288 } 00:13:58.288 ] 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3424818 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3417185 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3417185 ']' 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3417185 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3417185 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3417185' 00:13:58.288 killing process with pid 3417185 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 3417185 00:13:58.288 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3417185 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3425054 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3425054' 00:13:58.547 Process pid: 3425054 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3425054 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3425054 ']' 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.547 
17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.547 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:58.547 [2024-11-19 17:32:00.644934] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:58.547 [2024-11-19 17:32:00.645798] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:13:58.547 [2024-11-19 17:32:00.645839] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.547 [2024-11-19 17:32:00.718943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.547 [2024-11-19 17:32:00.756222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.547 [2024-11-19 17:32:00.756263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.547 [2024-11-19 17:32:00.756270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.547 [2024-11-19 17:32:00.756277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.547 [2024-11-19 17:32:00.756281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:58.547 [2024-11-19 17:32:00.757897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.547 [2024-11-19 17:32:00.758060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.547 [2024-11-19 17:32:00.758018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.547 [2024-11-19 17:32:00.758061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.807 [2024-11-19 17:32:00.826233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:58.807 [2024-11-19 17:32:00.826641] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:58.807 [2024-11-19 17:32:00.827106] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:58.807 [2024-11-19 17:32:00.827431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:58.807 [2024-11-19 17:32:00.827487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:13:58.807 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.807 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:58.807 17:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:59.744 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:00.002 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:00.002 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:00.002 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:00.002 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:00.002 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:00.262 Malloc1 00:14:00.262 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:00.521 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:00.521 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:00.780 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:00.780 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:00.780 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.040 Malloc2 00:14:01.040 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:01.299 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:01.299 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3425054 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3425054 ']' 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3425054 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.558 17:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3425054 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3425054' 00:14:01.558 killing process with pid 3425054 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3425054 00:14:01.558 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3425054 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:01.817 00:14:01.817 real 0m50.876s 00:14:01.817 user 3m16.901s 00:14:01.817 sys 0m3.194s 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:01.817 ************************************ 00:14:01.817 END TEST nvmf_vfio_user 00:14:01.817 ************************************ 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.817 17:32:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.817 ************************************ 00:14:01.817 START TEST nvmf_vfio_user_nvme_compliance 00:14:01.817 ************************************ 00:14:01.817 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:02.078 * Looking for test storage... 00:14:02.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.078 17:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.078 17:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:02.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.078 --rc genhtml_branch_coverage=1 00:14:02.078 --rc genhtml_function_coverage=1 00:14:02.078 --rc genhtml_legend=1 00:14:02.078 --rc geninfo_all_blocks=1 00:14:02.078 --rc geninfo_unexecuted_blocks=1 00:14:02.078 00:14:02.078 ' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:02.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.078 --rc genhtml_branch_coverage=1 00:14:02.078 --rc genhtml_function_coverage=1 00:14:02.078 --rc genhtml_legend=1 00:14:02.078 --rc geninfo_all_blocks=1 00:14:02.078 --rc geninfo_unexecuted_blocks=1 00:14:02.078 00:14:02.078 ' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:02.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.078 --rc genhtml_branch_coverage=1 00:14:02.078 --rc genhtml_function_coverage=1 00:14:02.078 --rc 
genhtml_legend=1 00:14:02.078 --rc geninfo_all_blocks=1 00:14:02.078 --rc geninfo_unexecuted_blocks=1 00:14:02.078 00:14:02.078 ' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:02.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.078 --rc genhtml_branch_coverage=1 00:14:02.078 --rc genhtml_function_coverage=1 00:14:02.078 --rc genhtml_legend=1 00:14:02.078 --rc geninfo_all_blocks=1 00:14:02.078 --rc geninfo_unexecuted_blocks=1 00:14:02.078 00:14:02.078 ' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.078 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.079 17:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.079 17:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3425718 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3425718' 00:14:02.079 Process pid: 3425718 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3425718 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3425718 ']' 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.079 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.079 [2024-11-19 17:32:04.280716] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:14:02.079 [2024-11-19 17:32:04.280767] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.339 [2024-11-19 17:32:04.357093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.339 [2024-11-19 17:32:04.398926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.339 [2024-11-19 17:32:04.398967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.339 [2024-11-19 17:32:04.398975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.339 [2024-11-19 17:32:04.398981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.339 [2024-11-19 17:32:04.398986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:02.339 [2024-11-19 17:32:04.400328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.339 [2024-11-19 17:32:04.400442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.339 [2024-11-19 17:32:04.400442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.339 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.339 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:02.339 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.717 17:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.717 malloc0 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:03.717 17:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:03.717 00:14:03.717 00:14:03.717 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.717 http://cunit.sourceforge.net/ 00:14:03.717 00:14:03.717 00:14:03.717 Suite: nvme_compliance 00:14:03.717 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 17:32:05.732392] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.718 [2024-11-19 17:32:05.733737] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:03.718 [2024-11-19 17:32:05.733751] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:03.718 [2024-11-19 17:32:05.733757] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:03.718 [2024-11-19 17:32:05.735411] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.718 passed 00:14:03.718 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 17:32:05.813979] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.718 [2024-11-19 17:32:05.816996] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.718 passed 00:14:03.718 Test: admin_identify_ns ...[2024-11-19 17:32:05.896471] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.977 [2024-11-19 17:32:05.959959] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:03.977 [2024-11-19 17:32:05.967958] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:03.977 [2024-11-19 17:32:05.989052] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:03.977 passed 00:14:03.977 Test: admin_get_features_mandatory_features ...[2024-11-19 17:32:06.062075] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.977 [2024-11-19 17:32:06.065086] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.977 passed 00:14:03.977 Test: admin_get_features_optional_features ...[2024-11-19 17:32:06.143601] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.977 [2024-11-19 17:32:06.146620] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.977 passed 00:14:04.236 Test: admin_set_features_number_of_queues ...[2024-11-19 17:32:06.224431] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.236 [2024-11-19 17:32:06.330043] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.236 passed 00:14:04.236 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 17:32:06.405208] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.236 [2024-11-19 17:32:06.408236] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.236 passed 00:14:04.495 Test: admin_get_log_page_with_lpo ...[2024-11-19 17:32:06.480337] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.495 [2024-11-19 17:32:06.551955] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:04.495 [2024-11-19 17:32:06.565010] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.495 passed 00:14:04.495 Test: fabric_property_get ...[2024-11-19 17:32:06.641994] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.495 [2024-11-19 17:32:06.643242] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:04.495 [2024-11-19 17:32:06.645020] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.495 passed 00:14:04.754 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 17:32:06.721527] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.754 [2024-11-19 17:32:06.722777] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:04.754 [2024-11-19 17:32:06.724552] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.754 passed 00:14:04.754 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 17:32:06.803304] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.754 [2024-11-19 17:32:06.887956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.754 [2024-11-19 17:32:06.903957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.754 [2024-11-19 17:32:06.909035] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.754 passed 00:14:05.013 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 17:32:06.983279] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.013 [2024-11-19 17:32:06.984532] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:05.013 [2024-11-19 17:32:06.986303] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.013 passed 00:14:05.013 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 17:32:07.062296] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.013 [2024-11-19 17:32:07.141957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:05.013 [2024-11-19 
17:32:07.165959] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:05.013 [2024-11-19 17:32:07.171033] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.013 passed 00:14:05.273 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 17:32:07.245140] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.273 [2024-11-19 17:32:07.246374] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:05.273 [2024-11-19 17:32:07.246398] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:05.273 [2024-11-19 17:32:07.248165] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.273 passed 00:14:05.273 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 17:32:07.325061] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.273 [2024-11-19 17:32:07.418958] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:05.273 [2024-11-19 17:32:07.426953] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:05.273 [2024-11-19 17:32:07.434962] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:05.273 [2024-11-19 17:32:07.442958] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:05.273 [2024-11-19 17:32:07.472032] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.532 passed 00:14:05.532 Test: admin_create_io_sq_verify_pc ...[2024-11-19 17:32:07.550134] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.532 [2024-11-19 17:32:07.564964] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:05.532 [2024-11-19 17:32:07.582306] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.532 passed 00:14:05.532 Test: admin_create_io_qp_max_qps ...[2024-11-19 17:32:07.659838] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.910 [2024-11-19 17:32:08.759957] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:07.170 [2024-11-19 17:32:09.137446] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.170 passed 00:14:07.170 Test: admin_create_io_sq_shared_cq ...[2024-11-19 17:32:09.212354] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.170 [2024-11-19 17:32:09.347954] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:07.170 [2024-11-19 17:32:09.385026] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.430 passed 00:14:07.430 00:14:07.430 Run Summary: Type Total Ran Passed Failed Inactive 00:14:07.430 suites 1 1 n/a 0 0 00:14:07.430 tests 18 18 18 0 0 00:14:07.430 asserts 360 360 360 0 n/a 00:14:07.430 00:14:07.430 Elapsed time = 1.503 seconds 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3425718 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3425718 ']' 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3425718 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3425718 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3425718' 00:14:07.430 killing process with pid 3425718 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3425718 00:14:07.430 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3425718 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:07.689 00:14:07.689 real 0m5.640s 00:14:07.689 user 0m15.703s 00:14:07.689 sys 0m0.531s 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:07.689 ************************************ 00:14:07.689 END TEST nvmf_vfio_user_nvme_compliance 00:14:07.689 ************************************ 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.689 ************************************ 00:14:07.689 START TEST nvmf_vfio_user_fuzz 00:14:07.689 ************************************ 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.689 * Looking for test storage... 00:14:07.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.689 17:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.689 --rc genhtml_branch_coverage=1 00:14:07.689 --rc genhtml_function_coverage=1 00:14:07.689 --rc genhtml_legend=1 00:14:07.689 --rc geninfo_all_blocks=1 00:14:07.689 --rc geninfo_unexecuted_blocks=1 00:14:07.689 00:14:07.689 ' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.689 --rc genhtml_branch_coverage=1 00:14:07.689 --rc genhtml_function_coverage=1 00:14:07.689 --rc genhtml_legend=1 00:14:07.689 --rc geninfo_all_blocks=1 00:14:07.689 --rc geninfo_unexecuted_blocks=1 00:14:07.689 00:14:07.689 ' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.689 --rc genhtml_branch_coverage=1 00:14:07.689 --rc genhtml_function_coverage=1 00:14:07.689 --rc genhtml_legend=1 00:14:07.689 --rc geninfo_all_blocks=1 00:14:07.689 --rc geninfo_unexecuted_blocks=1 00:14:07.689 00:14:07.689 ' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:07.689 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:07.689 --rc genhtml_branch_coverage=1 00:14:07.689 --rc genhtml_function_coverage=1 00:14:07.689 --rc genhtml_legend=1 00:14:07.689 --rc geninfo_all_blocks=1 00:14:07.689 --rc geninfo_unexecuted_blocks=1 00:14:07.689 00:14:07.689 ' 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.689 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.948 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.948 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.948 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.948 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.948 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.948 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.948 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.949 17:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3426764 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3426764' 00:14:07.949 Process pid: 3426764 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3426764 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3426764 ']' 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.949 17:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.949 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.208 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.208 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:08.208 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.147 malloc0 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:09.147 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:41.366 Fuzzing completed. Shutting down the fuzz application 00:14:41.366 00:14:41.366 Dumping successful admin opcodes: 00:14:41.366 8, 9, 10, 24, 00:14:41.366 Dumping successful io opcodes: 00:14:41.366 0, 00:14:41.366 NS: 0x20000081ef00 I/O qp, Total commands completed: 1016540, total successful commands: 3991, random_seed: 1309622592 00:14:41.366 NS: 0x20000081ef00 admin qp, Total commands completed: 250583, total successful commands: 2026, random_seed: 1949538816 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3426764 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3426764 ']' 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3426764 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3426764 00:14:41.366 17:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3426764' 00:14:41.366 killing process with pid 3426764 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3426764 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3426764 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:41.366 00:14:41.366 real 0m32.223s 00:14:41.366 user 0m29.585s 00:14:41.366 sys 0m32.081s 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.366 ************************************ 00:14:41.366 END TEST nvmf_vfio_user_fuzz 00:14:41.366 ************************************ 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:41.366 17:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.366 ************************************ 00:14:41.366 START TEST nvmf_auth_target 00:14:41.366 ************************************ 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:41.366 * Looking for test storage... 00:14:41.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.366 17:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:41.366 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.367 17:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:41.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.367 --rc genhtml_branch_coverage=1 00:14:41.367 --rc genhtml_function_coverage=1 00:14:41.367 --rc genhtml_legend=1 00:14:41.367 --rc geninfo_all_blocks=1 00:14:41.367 --rc geninfo_unexecuted_blocks=1 00:14:41.367 00:14:41.367 ' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:41.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.367 --rc genhtml_branch_coverage=1 00:14:41.367 --rc genhtml_function_coverage=1 00:14:41.367 --rc genhtml_legend=1 00:14:41.367 --rc geninfo_all_blocks=1 00:14:41.367 --rc geninfo_unexecuted_blocks=1 00:14:41.367 00:14:41.367 ' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:41.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.367 --rc genhtml_branch_coverage=1 00:14:41.367 --rc genhtml_function_coverage=1 00:14:41.367 --rc genhtml_legend=1 00:14:41.367 --rc geninfo_all_blocks=1 00:14:41.367 --rc geninfo_unexecuted_blocks=1 00:14:41.367 00:14:41.367 ' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:41.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.367 --rc genhtml_branch_coverage=1 00:14:41.367 --rc genhtml_function_coverage=1 00:14:41.367 --rc genhtml_legend=1 00:14:41.367 
--rc geninfo_all_blocks=1 00:14:41.367 --rc geninfo_unexecuted_blocks=1 00:14:41.367 00:14:41.367 ' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.367 
17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:41.367 17:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.367 17:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.367 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:46.651 17:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:46.651 17:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:46.651 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:46.651 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.651 
17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:46.651 Found net devices under 0000:86:00.0: cvl_0_0 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.651 
17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:46.651 Found net devices under 0000:86:00.1: cvl_0_1 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:46.651 17:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.651 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.651 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:46.651 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.651 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:46.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:14:46.652 00:14:46.652 --- 10.0.0.2 ping statistics --- 00:14:46.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.652 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:46.652 00:14:46.652 --- 10.0.0.1 ping statistics --- 00:14:46.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.652 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3435123 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3435123 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3435123 ']' 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3435151 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1a787dbeb75f9cb588b8ebfe752003a0359f39149affd75d 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zgX 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1a787dbeb75f9cb588b8ebfe752003a0359f39149affd75d 0 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1a787dbeb75f9cb588b8ebfe752003a0359f39149affd75d 0 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1a787dbeb75f9cb588b8ebfe752003a0359f39149affd75d 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zgX 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zgX 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.zgX 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bdecf5f30c3b5a465bd3fcaac86e6a26cf7189d43d857b6d98d6977df4e329b9 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.v6F 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bdecf5f30c3b5a465bd3fcaac86e6a26cf7189d43d857b6d98d6977df4e329b9 3 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bdecf5f30c3b5a465bd3fcaac86e6a26cf7189d43d857b6d98d6977df4e329b9 3 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bdecf5f30c3b5a465bd3fcaac86e6a26cf7189d43d857b6d98d6977df4e329b9 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.v6F 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.v6F 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.v6F 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:46.652 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a3ea55039fb128286a4a1b020f9d43fd 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sFy 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a3ea55039fb128286a4a1b020f9d43fd 1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
a3ea55039fb128286a4a1b020f9d43fd 1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a3ea55039fb128286a4a1b020f9d43fd 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sFy 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sFy 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.sFy 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0b6c2bf9cadefaaa6bfb9e2411cc25774f660efd5f1ff984 00:14:46.653 17:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RWA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0b6c2bf9cadefaaa6bfb9e2411cc25774f660efd5f1ff984 2 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0b6c2bf9cadefaaa6bfb9e2411cc25774f660efd5f1ff984 2 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0b6c2bf9cadefaaa6bfb9e2411cc25774f660efd5f1ff984 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RWA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RWA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.RWA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=200fb118e0ac0dde41a7c9b28cc844a5890ce36b9a2daa27 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.McA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 200fb118e0ac0dde41a7c9b28cc844a5890ce36b9a2daa27 2 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 200fb118e0ac0dde41a7c9b28cc844a5890ce36b9a2daa27 2 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=200fb118e0ac0dde41a7c9b28cc844a5890ce36b9a2daa27 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.McA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.McA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.McA 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e69b1f95bea298e8003f835af7a73c5e 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3w4 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e69b1f95bea298e8003f835af7a73c5e 1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e69b1f95bea298e8003f835af7a73c5e 1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e69b1f95bea298e8003f835af7a73c5e 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3w4 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3w4 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.3w4 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b876b8eee54b85e916ea373a4028224acfc332ad08942c803bfca85be963bf2b 00:14:46.653 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QZO 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b876b8eee54b85e916ea373a4028224acfc332ad08942c803bfca85be963bf2b 3 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 b876b8eee54b85e916ea373a4028224acfc332ad08942c803bfca85be963bf2b 3 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b876b8eee54b85e916ea373a4028224acfc332ad08942c803bfca85be963bf2b 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:46.654 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QZO 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QZO 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.QZO 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3435123 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3435123 ']' 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.913 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3435151 /var/tmp/host.sock 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3435151 ']' 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:46.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.913 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.174 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.174 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:47.174 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zgX 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zgX 00:14:47.175 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zgX 00:14:47.439 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.v6F ]] 00:14:47.439 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6F 00:14:47.439 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.439 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.439 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.439 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6F 00:14:47.439 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6F 00:14:47.698 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:47.698 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sFy 00:14:47.698 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.698 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.698 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.698 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sFy 00:14:47.698 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sFy 00:14:47.957 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.RWA ]] 00:14:47.957 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RWA 00:14:47.957 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.957 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.957 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.957 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RWA 00:14:47.957 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RWA 00:14:47.957 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:47.957 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.McA 00:14:47.957 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.957 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.957 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.957 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.McA 00:14:47.957 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.McA 00:14:48.216 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.3w4 ]] 00:14:48.216 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3w4 00:14:48.216 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.216 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.216 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.216 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3w4 00:14:48.216 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3w4 00:14:48.475 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:48.475 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QZO 00:14:48.475 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.475 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.475 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.475 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.QZO 00:14:48.475 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.QZO 00:14:48.734 17:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:48.734 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:48.734 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.734 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.734 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:48.734 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.994 17:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.994 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.994 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.254 { 00:14:49.254 "cntlid": 1, 00:14:49.254 "qid": 0, 00:14:49.254 "state": "enabled", 00:14:49.254 "thread": "nvmf_tgt_poll_group_000", 00:14:49.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:49.254 "listen_address": { 00:14:49.254 "trtype": "TCP", 00:14:49.254 "adrfam": "IPv4", 00:14:49.254 "traddr": "10.0.0.2", 00:14:49.254 "trsvcid": "4420" 00:14:49.254 }, 00:14:49.254 "peer_address": { 00:14:49.254 "trtype": "TCP", 00:14:49.254 "adrfam": "IPv4", 00:14:49.254 "traddr": "10.0.0.1", 00:14:49.254 "trsvcid": "48248" 00:14:49.254 }, 00:14:49.254 "auth": { 00:14:49.254 "state": "completed", 00:14:49.254 "digest": "sha256", 00:14:49.254 "dhgroup": "null" 00:14:49.254 } 00:14:49.254 } 00:14:49.254 ]' 00:14:49.254 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.512 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.512 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.512 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:49.512 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.512 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.512 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.512 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.770 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:14:49.770 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:50.337 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.596 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.855 00:14:50.855 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.855 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.855 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.855 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.855 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.855 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.855 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.855 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.855 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.855 { 00:14:50.855 "cntlid": 3, 00:14:50.855 "qid": 0, 00:14:50.855 "state": "enabled", 00:14:50.855 "thread": "nvmf_tgt_poll_group_000", 00:14:50.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:50.855 "listen_address": { 00:14:50.855 "trtype": "TCP", 00:14:50.855 "adrfam": "IPv4", 00:14:50.855 
"traddr": "10.0.0.2", 00:14:50.855 "trsvcid": "4420" 00:14:50.855 }, 00:14:50.855 "peer_address": { 00:14:50.855 "trtype": "TCP", 00:14:50.855 "adrfam": "IPv4", 00:14:50.855 "traddr": "10.0.0.1", 00:14:50.855 "trsvcid": "48274" 00:14:50.855 }, 00:14:50.855 "auth": { 00:14:50.855 "state": "completed", 00:14:50.855 "digest": "sha256", 00:14:50.855 "dhgroup": "null" 00:14:50.855 } 00:14:50.855 } 00:14:50.855 ]' 00:14:50.855 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.114 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.114 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.114 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:51.114 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.114 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.114 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.114 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.373 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:14:51.373 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.941 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.200 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.459 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.459 
17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.459 { 00:14:52.459 "cntlid": 5, 00:14:52.459 "qid": 0, 00:14:52.459 "state": "enabled", 00:14:52.459 "thread": "nvmf_tgt_poll_group_000", 00:14:52.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:52.459 "listen_address": { 00:14:52.459 "trtype": "TCP", 00:14:52.459 "adrfam": "IPv4", 00:14:52.459 "traddr": "10.0.0.2", 00:14:52.459 "trsvcid": "4420" 00:14:52.459 }, 00:14:52.459 "peer_address": { 00:14:52.459 "trtype": "TCP", 00:14:52.459 "adrfam": "IPv4", 00:14:52.459 "traddr": "10.0.0.1", 00:14:52.459 "trsvcid": "47434" 00:14:52.459 }, 00:14:52.459 "auth": { 00:14:52.459 "state": "completed", 00:14:52.459 "digest": "sha256", 00:14:52.459 "dhgroup": "null" 00:14:52.459 } 00:14:52.459 } 00:14:52.459 ]' 00:14:52.459 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.718 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.718 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:52.718 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.718 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.718 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.718 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.718 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.977 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:14:52.977 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:53.546 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.805 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.805 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.064 
17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.064 { 00:14:54.064 "cntlid": 7, 00:14:54.064 "qid": 0, 00:14:54.064 "state": "enabled", 00:14:54.064 "thread": "nvmf_tgt_poll_group_000", 00:14:54.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:54.064 "listen_address": { 00:14:54.064 "trtype": "TCP", 00:14:54.064 "adrfam": "IPv4", 00:14:54.064 "traddr": "10.0.0.2", 00:14:54.064 "trsvcid": "4420" 00:14:54.064 }, 00:14:54.064 "peer_address": { 00:14:54.064 "trtype": "TCP", 00:14:54.064 "adrfam": "IPv4", 00:14:54.064 "traddr": "10.0.0.1", 00:14:54.064 "trsvcid": "47456" 00:14:54.064 }, 00:14:54.064 "auth": { 00:14:54.064 "state": "completed", 00:14:54.064 "digest": "sha256", 00:14:54.064 "dhgroup": "null" 00:14:54.064 } 00:14:54.064 } 00:14:54.064 ]' 00:14:54.064 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.323 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.323 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.323 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:54.323 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.323 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.323 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.323 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.582 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:14:54.582 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:55.150 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.151 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.409 00:14:55.409 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.410 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.410 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.669 { 00:14:55.669 "cntlid": 9, 00:14:55.669 "qid": 0, 00:14:55.669 "state": "enabled", 00:14:55.669 "thread": "nvmf_tgt_poll_group_000", 00:14:55.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:55.669 "listen_address": { 00:14:55.669 "trtype": "TCP", 00:14:55.669 "adrfam": "IPv4", 00:14:55.669 "traddr": "10.0.0.2", 00:14:55.669 "trsvcid": "4420" 00:14:55.669 }, 00:14:55.669 "peer_address": { 00:14:55.669 "trtype": "TCP", 00:14:55.669 "adrfam": "IPv4", 00:14:55.669 "traddr": "10.0.0.1", 00:14:55.669 "trsvcid": "47468" 00:14:55.669 
}, 00:14:55.669 "auth": { 00:14:55.669 "state": "completed", 00:14:55.669 "digest": "sha256", 00:14:55.669 "dhgroup": "ffdhe2048" 00:14:55.669 } 00:14:55.669 } 00:14:55.669 ]' 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.669 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.928 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.929 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.929 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.929 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.929 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.929 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:14:55.929 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret 
DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:14:56.497 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.757 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.016 00:14:57.016 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.016 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.016 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.274 { 00:14:57.274 "cntlid": 11, 00:14:57.274 "qid": 0, 00:14:57.274 "state": "enabled", 00:14:57.274 "thread": "nvmf_tgt_poll_group_000", 00:14:57.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:57.274 "listen_address": { 00:14:57.274 "trtype": "TCP", 00:14:57.274 "adrfam": "IPv4", 00:14:57.274 "traddr": "10.0.0.2", 00:14:57.274 "trsvcid": "4420" 00:14:57.274 }, 00:14:57.274 "peer_address": { 00:14:57.274 "trtype": "TCP", 00:14:57.274 "adrfam": "IPv4", 00:14:57.274 "traddr": "10.0.0.1", 00:14:57.274 "trsvcid": "47498" 00:14:57.274 }, 00:14:57.274 "auth": { 00:14:57.274 "state": "completed", 00:14:57.274 "digest": "sha256", 00:14:57.274 "dhgroup": "ffdhe2048" 00:14:57.274 } 00:14:57.274 } 00:14:57.274 ]' 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.274 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.534 17:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:57.534 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.534 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.534 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.534 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.793 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:14:57.794 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.362 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.621 00:14:58.621 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.621 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.621 17:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.881 17:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.881 { 00:14:58.881 "cntlid": 13, 00:14:58.881 "qid": 0, 00:14:58.881 "state": "enabled", 00:14:58.881 "thread": "nvmf_tgt_poll_group_000", 00:14:58.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:58.881 "listen_address": { 00:14:58.881 "trtype": "TCP", 00:14:58.881 "adrfam": "IPv4", 00:14:58.881 "traddr": "10.0.0.2", 00:14:58.881 "trsvcid": "4420" 00:14:58.881 }, 00:14:58.881 "peer_address": { 00:14:58.881 "trtype": "TCP", 00:14:58.881 "adrfam": "IPv4", 00:14:58.881 "traddr": "10.0.0.1", 00:14:58.881 "trsvcid": "47512" 00:14:58.881 }, 00:14:58.881 "auth": { 00:14:58.881 "state": "completed", 00:14:58.881 "digest": "sha256", 00:14:58.881 "dhgroup": "ffdhe2048" 00:14:58.881 } 00:14:58.881 } 00:14:58.881 ]' 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.881 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.140 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.140 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.140 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.140 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.140 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.140 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:14:59.140 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:14:59.708 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.968 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.968 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.968 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.968 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.968 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.968 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:59.968 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.968 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.227 00:15:00.227 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.227 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.228 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.487 { 00:15:00.487 "cntlid": 15, 00:15:00.487 "qid": 0, 00:15:00.487 "state": "enabled", 00:15:00.487 "thread": "nvmf_tgt_poll_group_000", 00:15:00.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:00.487 "listen_address": { 00:15:00.487 "trtype": "TCP", 00:15:00.487 "adrfam": "IPv4", 00:15:00.487 "traddr": "10.0.0.2", 00:15:00.487 "trsvcid": "4420" 00:15:00.487 }, 00:15:00.487 "peer_address": { 00:15:00.487 "trtype": "TCP", 00:15:00.487 "adrfam": "IPv4", 00:15:00.487 "traddr": "10.0.0.1", 
00:15:00.487 "trsvcid": "47548" 00:15:00.487 }, 00:15:00.487 "auth": { 00:15:00.487 "state": "completed", 00:15:00.487 "digest": "sha256", 00:15:00.487 "dhgroup": "ffdhe2048" 00:15:00.487 } 00:15:00.487 } 00:15:00.487 ]' 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.487 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.746 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.746 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.746 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.746 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:00.746 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.315 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.575 17:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.575 17:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.834 00:15:01.834 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.834 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.834 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.094 { 00:15:02.094 "cntlid": 17, 00:15:02.094 "qid": 0, 00:15:02.094 "state": "enabled", 00:15:02.094 "thread": "nvmf_tgt_poll_group_000", 00:15:02.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:02.094 "listen_address": { 00:15:02.094 "trtype": "TCP", 00:15:02.094 "adrfam": "IPv4", 00:15:02.094 "traddr": "10.0.0.2", 00:15:02.094 "trsvcid": "4420" 00:15:02.094 }, 00:15:02.094 "peer_address": { 00:15:02.094 "trtype": "TCP", 00:15:02.094 "adrfam": "IPv4", 00:15:02.094 "traddr": "10.0.0.1", 00:15:02.094 "trsvcid": "49096" 00:15:02.094 }, 00:15:02.094 "auth": { 00:15:02.094 "state": "completed", 00:15:02.094 "digest": "sha256", 00:15:02.094 "dhgroup": "ffdhe3072" 00:15:02.094 } 00:15:02.094 } 00:15:02.094 ]' 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.094 17:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.094 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.353 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.353 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.353 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.353 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:02.353 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:02.921 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.921 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.921 17:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.921 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.921 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.921 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.921 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:02.921 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.180 17:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.180 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.440 00:15:03.440 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.440 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.440 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.699 { 00:15:03.699 "cntlid": 19, 00:15:03.699 "qid": 0, 00:15:03.699 "state": "enabled", 00:15:03.699 "thread": "nvmf_tgt_poll_group_000", 00:15:03.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:03.699 "listen_address": { 00:15:03.699 "trtype": "TCP", 00:15:03.699 "adrfam": "IPv4", 00:15:03.699 "traddr": "10.0.0.2", 00:15:03.699 "trsvcid": "4420" 00:15:03.699 }, 00:15:03.699 "peer_address": { 00:15:03.699 "trtype": "TCP", 00:15:03.699 "adrfam": "IPv4", 00:15:03.699 "traddr": "10.0.0.1", 00:15:03.699 "trsvcid": "49120" 00:15:03.699 }, 00:15:03.699 "auth": { 00:15:03.699 "state": "completed", 00:15:03.699 "digest": "sha256", 00:15:03.699 "dhgroup": "ffdhe3072" 00:15:03.699 } 00:15:03.699 } 00:15:03.699 ]' 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:03.699 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.959 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.959 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.959 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.959 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:03.959 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:04.528 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.528 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.528 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.528 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.528 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.528 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.528 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.528 17:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.787 17:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.046 00:15:05.046 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.046 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.046 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.305 { 00:15:05.305 "cntlid": 21, 00:15:05.305 "qid": 0, 00:15:05.305 "state": "enabled", 00:15:05.305 "thread": "nvmf_tgt_poll_group_000", 00:15:05.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.305 "listen_address": { 00:15:05.305 "trtype": "TCP", 00:15:05.305 "adrfam": "IPv4", 00:15:05.305 "traddr": "10.0.0.2", 00:15:05.305 
"trsvcid": "4420" 00:15:05.305 }, 00:15:05.305 "peer_address": { 00:15:05.305 "trtype": "TCP", 00:15:05.305 "adrfam": "IPv4", 00:15:05.305 "traddr": "10.0.0.1", 00:15:05.305 "trsvcid": "49146" 00:15:05.305 }, 00:15:05.305 "auth": { 00:15:05.305 "state": "completed", 00:15:05.305 "digest": "sha256", 00:15:05.305 "dhgroup": "ffdhe3072" 00:15:05.305 } 00:15:05.305 } 00:15:05.305 ]' 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.305 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.564 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:05.564 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:06.131 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.390 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.649 00:15:06.649 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.649 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.649 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.908 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.908 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.908 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.908 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.908 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.908 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.908 { 00:15:06.908 "cntlid": 23, 00:15:06.908 "qid": 0, 00:15:06.908 "state": "enabled", 00:15:06.908 "thread": "nvmf_tgt_poll_group_000", 00:15:06.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:06.908 "listen_address": { 00:15:06.909 "trtype": "TCP", 00:15:06.909 "adrfam": "IPv4", 00:15:06.909 "traddr": "10.0.0.2", 00:15:06.909 "trsvcid": "4420" 00:15:06.909 }, 00:15:06.909 "peer_address": { 00:15:06.909 "trtype": "TCP", 00:15:06.909 "adrfam": "IPv4", 00:15:06.909 "traddr": "10.0.0.1", 00:15:06.909 "trsvcid": "49180" 00:15:06.909 }, 00:15:06.909 "auth": { 00:15:06.909 "state": "completed", 00:15:06.909 "digest": "sha256", 00:15:06.909 "dhgroup": "ffdhe3072" 00:15:06.909 } 00:15:06.909 } 00:15:06.909 ]' 00:15:06.909 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.909 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.909 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.909 17:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.909 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.909 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.909 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.909 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.166 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:07.166 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.734 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.992 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.250 00:15:08.250 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.250 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.250 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.510 17:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.510 { 00:15:08.510 "cntlid": 25, 00:15:08.510 "qid": 0, 00:15:08.510 "state": "enabled", 00:15:08.510 "thread": "nvmf_tgt_poll_group_000", 00:15:08.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:08.510 "listen_address": { 00:15:08.510 "trtype": "TCP", 00:15:08.510 "adrfam": "IPv4", 00:15:08.510 "traddr": "10.0.0.2", 00:15:08.510 "trsvcid": "4420" 00:15:08.510 }, 00:15:08.510 "peer_address": { 00:15:08.510 "trtype": "TCP", 00:15:08.510 "adrfam": "IPv4", 00:15:08.510 "traddr": "10.0.0.1", 00:15:08.510 "trsvcid": "49204" 00:15:08.510 }, 00:15:08.510 "auth": { 00:15:08.510 "state": "completed", 00:15:08.510 "digest": "sha256", 00:15:08.510 "dhgroup": "ffdhe4096" 00:15:08.510 } 00:15:08.510 } 00:15:08.510 ]' 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.510 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.770 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:08.770 17:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:09.338 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.338 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.338 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.338 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.338 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.338 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.338 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:09.338 17:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.598 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.857 00:15:09.857 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.857 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.857 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.116 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.116 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.116 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.116 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.116 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.116 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.116 { 00:15:10.116 "cntlid": 27, 00:15:10.116 "qid": 0, 00:15:10.116 "state": "enabled", 00:15:10.116 "thread": "nvmf_tgt_poll_group_000", 00:15:10.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.116 "listen_address": { 00:15:10.116 "trtype": "TCP", 00:15:10.116 "adrfam": "IPv4", 00:15:10.116 "traddr": "10.0.0.2", 00:15:10.116 
"trsvcid": "4420" 00:15:10.116 }, 00:15:10.116 "peer_address": { 00:15:10.116 "trtype": "TCP", 00:15:10.116 "adrfam": "IPv4", 00:15:10.116 "traddr": "10.0.0.1", 00:15:10.116 "trsvcid": "49238" 00:15:10.116 }, 00:15:10.116 "auth": { 00:15:10.116 "state": "completed", 00:15:10.116 "digest": "sha256", 00:15:10.116 "dhgroup": "ffdhe4096" 00:15:10.116 } 00:15:10.116 } 00:15:10.116 ]' 00:15:10.117 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.117 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.117 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.117 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.117 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.375 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.376 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.376 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.376 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:10.376 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:10.943 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.943 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.943 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.943 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.943 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.943 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.944 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.944 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.203 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.462 00:15:11.462 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.462 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.462 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.721 { 00:15:11.721 "cntlid": 29, 00:15:11.721 "qid": 0, 00:15:11.721 "state": "enabled", 00:15:11.721 "thread": "nvmf_tgt_poll_group_000", 00:15:11.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:11.721 "listen_address": { 00:15:11.721 "trtype": "TCP", 00:15:11.721 "adrfam": "IPv4", 00:15:11.721 "traddr": "10.0.0.2", 00:15:11.721 "trsvcid": "4420" 00:15:11.721 }, 00:15:11.721 "peer_address": { 00:15:11.721 "trtype": "TCP", 00:15:11.721 "adrfam": "IPv4", 00:15:11.721 "traddr": "10.0.0.1", 00:15:11.721 "trsvcid": "49260" 00:15:11.721 }, 00:15:11.721 "auth": { 00:15:11.721 "state": "completed", 00:15:11.721 "digest": "sha256", 00:15:11.721 "dhgroup": "ffdhe4096" 00:15:11.721 } 00:15:11.721 } 00:15:11.721 ]' 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.721 17:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.721 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.981 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.981 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.981 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.981 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:11.981 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:12.549 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.549 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.549 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.549 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.809 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.068 00:15:13.068 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.068 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.068 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x
00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:13.327 {
00:15:13.327 "cntlid": 31,
00:15:13.327 "qid": 0,
00:15:13.327 "state": "enabled",
00:15:13.327 "thread": "nvmf_tgt_poll_group_000",
00:15:13.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:13.327 "listen_address": {
00:15:13.327 "trtype": "TCP",
00:15:13.327 "adrfam": "IPv4",
00:15:13.327 "traddr": "10.0.0.2",
00:15:13.327 "trsvcid": "4420"
00:15:13.327 },
00:15:13.327 "peer_address": {
00:15:13.327 "trtype": "TCP",
00:15:13.327 "adrfam": "IPv4",
00:15:13.327 "traddr": "10.0.0.1",
00:15:13.327 "trsvcid": "35430"
00:15:13.327 },
00:15:13.327 "auth": {
00:15:13.327 "state": "completed",
00:15:13.327 "digest": "sha256",
00:15:13.327 "dhgroup": "ffdhe4096"
00:15:13.327 }
00:15:13.327 }
00:15:13.327 ]'
00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:13.327 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:13.586 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:13.586 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:13.586 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:13.586 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:13.586 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:13.845 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=:
00:15:13.845 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=:
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:14.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.415 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.983
00:15:14.983 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:14.984 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:14.984 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:14.984 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:14.984 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:14.984 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.984 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.984 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.984 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:14.984 {
00:15:14.984 "cntlid": 33,
00:15:14.984 "qid": 0,
00:15:14.984 "state": "enabled",
00:15:14.984 "thread": "nvmf_tgt_poll_group_000",
00:15:14.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:14.984 "listen_address": {
00:15:14.984 "trtype": "TCP",
00:15:14.984 "adrfam": "IPv4",
00:15:14.984 "traddr": "10.0.0.2",
00:15:14.984 "trsvcid": "4420"
00:15:14.984 },
00:15:14.984 "peer_address": {
00:15:14.984 "trtype": "TCP",
00:15:14.984 "adrfam": "IPv4",
00:15:14.984 "traddr": "10.0.0.1",
00:15:14.984 "trsvcid": "35456"
00:15:14.984 },
00:15:14.984 "auth": {
00:15:14.984 "state": "completed",
00:15:14.984 "digest": "sha256",
00:15:14.984 "dhgroup": "ffdhe6144"
00:15:14.984 }
00:15:14.984 }
00:15:14.984 ]'
00:15:14.984 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:15.243 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:15.243 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:15.243 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:15.243 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:15.243 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:15.243 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:15.243 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:15.501 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=:
00:15:15.501 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=:
00:15:16.091 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:16.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:16.091 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:16.091 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.091 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.092 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.092 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:16.092 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:16.092 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.351 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.610
00:15:16.610 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:16.610 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:16.610 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:16.870 {
00:15:16.870 "cntlid": 35,
00:15:16.870 "qid": 0,
00:15:16.870 "state": "enabled",
00:15:16.870 "thread": "nvmf_tgt_poll_group_000",
00:15:16.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:16.870 "listen_address": {
00:15:16.870 "trtype": "TCP",
00:15:16.870 "adrfam": "IPv4",
00:15:16.870 "traddr": "10.0.0.2",
00:15:16.870 "trsvcid": "4420"
00:15:16.870 },
00:15:16.870 "peer_address": {
00:15:16.870 "trtype": "TCP",
00:15:16.870 "adrfam": "IPv4",
00:15:16.870 "traddr": "10.0.0.1",
00:15:16.870 "trsvcid": "35486"
00:15:16.870 },
00:15:16.870 "auth": {
00:15:16.870 "state": "completed",
00:15:16.870 "digest": "sha256",
00:15:16.870 "dhgroup": "ffdhe6144"
00:15:16.870 }
00:15:16.870 }
00:15:16.870 ]'
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:16.870 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:16.870 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:17.130 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:17.130 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:17.130 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==:
00:15:17.130 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==:
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:17.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:17.698 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.958 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:18.217
00:15:18.217 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:18.217 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:18.217 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:18.477 {
00:15:18.477 "cntlid": 37,
00:15:18.477 "qid": 0,
00:15:18.477 "state": "enabled",
00:15:18.477 "thread": "nvmf_tgt_poll_group_000",
00:15:18.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:18.477 "listen_address": {
00:15:18.477 "trtype": "TCP",
00:15:18.477 "adrfam": "IPv4",
00:15:18.477 "traddr": "10.0.0.2",
00:15:18.477 "trsvcid": "4420"
00:15:18.477 },
00:15:18.477 "peer_address": {
00:15:18.477 "trtype": "TCP",
00:15:18.477 "adrfam": "IPv4",
00:15:18.477 "traddr": "10.0.0.1",
00:15:18.477 "trsvcid": "35506"
00:15:18.477 },
00:15:18.477 "auth": {
00:15:18.477 "state": "completed",
00:15:18.477 "digest": "sha256",
00:15:18.477 "dhgroup": "ffdhe6144"
00:15:18.477 }
00:15:18.477 }
00:15:18.477 ]'
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:18.477 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:18.737 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:18.737 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:18.737 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:18.737 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:18.737 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:18.737 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp:
00:15:18.737 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp:
00:15:19.306 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:19.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:19.565 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:19.565 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.565 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.565 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.565 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:19.565 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:19.565 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:19.566 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:20.133
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.133 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:20.133 {
00:15:20.133 "cntlid": 39,
00:15:20.133 "qid": 0,
00:15:20.133 "state": "enabled",
00:15:20.133 "thread": "nvmf_tgt_poll_group_000",
00:15:20.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:20.133 "listen_address": {
00:15:20.133 "trtype": "TCP",
00:15:20.133 "adrfam": "IPv4",
00:15:20.133 "traddr": "10.0.0.2",
00:15:20.134 "trsvcid": "4420"
00:15:20.134 },
00:15:20.134 "peer_address": {
00:15:20.134 "trtype": "TCP",
00:15:20.134 "adrfam": "IPv4",
00:15:20.134 "traddr": "10.0.0.1",
00:15:20.134 "trsvcid": "35526"
00:15:20.134 },
00:15:20.134 "auth": {
00:15:20.134 "state": "completed",
00:15:20.134 "digest": "sha256",
00:15:20.134 "dhgroup": "ffdhe6144"
00:15:20.134 }
00:15:20.134 }
00:15:20.134 ]'
00:15:20.134 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:20.393 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:20.393 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:20.393 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:20.393 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:20.393 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:20.393 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:20.393 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:20.652 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=:
00:15:20.652 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=:
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:21.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:21.221 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.480 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.481 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.481 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.481 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.481 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.740
00:15:21.999 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:21.999 17:33:
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:21.999 {
00:15:21.999 "cntlid": 41,
00:15:21.999 "qid": 0,
00:15:21.999 "state": "enabled",
00:15:21.999 "thread": "nvmf_tgt_poll_group_000",
00:15:21.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:21.999 "listen_address": {
00:15:21.999 "trtype": "TCP",
00:15:21.999 "adrfam": "IPv4",
00:15:21.999 "traddr": "10.0.0.2",
00:15:21.999 "trsvcid": "4420"
00:15:21.999 },
00:15:21.999 "peer_address": {
00:15:21.999 "trtype": "TCP",
00:15:21.999 "adrfam": "IPv4",
00:15:21.999 "traddr": "10.0.0.1",
00:15:21.999 "trsvcid": "35550"
00:15:21.999 },
00:15:21.999 "auth": {
00:15:21.999 "state": "completed",
00:15:21.999 "digest": "sha256",
00:15:21.999 "dhgroup": "ffdhe8192"
00:15:21.999 }
00:15:21.999 }
00:15:21.999 ]'
00:15:21.999 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:22.258 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:22.258 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:22.258 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:22.258 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:22.258 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:22.258 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:22.258 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:22.518 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=:
00:15:22.518 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=:
00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- #
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.087 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.346 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.346 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.346 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.346 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.605 00:15:23.605 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.605 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.605 17:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.864 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.864 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.864 17:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.864 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.864 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.864 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.864 { 00:15:23.864 "cntlid": 43, 00:15:23.864 "qid": 0, 00:15:23.864 "state": "enabled", 00:15:23.864 "thread": "nvmf_tgt_poll_group_000", 00:15:23.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:23.864 "listen_address": { 00:15:23.864 "trtype": "TCP", 00:15:23.865 "adrfam": "IPv4", 00:15:23.865 "traddr": "10.0.0.2", 00:15:23.865 "trsvcid": "4420" 00:15:23.865 }, 00:15:23.865 "peer_address": { 00:15:23.865 "trtype": "TCP", 00:15:23.865 "adrfam": "IPv4", 00:15:23.865 "traddr": "10.0.0.1", 00:15:23.865 "trsvcid": "37580" 00:15:23.865 }, 00:15:23.865 "auth": { 00:15:23.865 "state": "completed", 00:15:23.865 "digest": "sha256", 00:15:23.865 "dhgroup": "ffdhe8192" 00:15:23.865 } 00:15:23.865 } 00:15:23.865 ]' 00:15:23.865 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.865 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.865 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.124 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.124 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.124 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.124 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.124 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.124 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:24.124 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:24.691 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.691 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.691 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.691 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.950 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.950 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.950 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.951 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.951 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.519 00:15:25.519 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.519 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.519 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.778 { 00:15:25.778 "cntlid": 45, 00:15:25.778 "qid": 0, 00:15:25.778 "state": "enabled", 00:15:25.778 "thread": "nvmf_tgt_poll_group_000", 00:15:25.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.778 
"listen_address": { 00:15:25.778 "trtype": "TCP", 00:15:25.778 "adrfam": "IPv4", 00:15:25.778 "traddr": "10.0.0.2", 00:15:25.778 "trsvcid": "4420" 00:15:25.778 }, 00:15:25.778 "peer_address": { 00:15:25.778 "trtype": "TCP", 00:15:25.778 "adrfam": "IPv4", 00:15:25.778 "traddr": "10.0.0.1", 00:15:25.778 "trsvcid": "37600" 00:15:25.778 }, 00:15:25.778 "auth": { 00:15:25.778 "state": "completed", 00:15:25.778 "digest": "sha256", 00:15:25.778 "dhgroup": "ffdhe8192" 00:15:25.778 } 00:15:25.778 } 00:15:25.778 ]' 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.778 17:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.038 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:26.038 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.606 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.865 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:26.865 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.865 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:26.865 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:26.865 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:26.865 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.866 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:26.866 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.866 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.866 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.866 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:26.866 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.866 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.454 00:15:27.454 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.454 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:27.454 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.713 { 00:15:27.713 "cntlid": 47, 00:15:27.713 "qid": 0, 00:15:27.713 "state": "enabled", 00:15:27.713 "thread": "nvmf_tgt_poll_group_000", 00:15:27.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.713 "listen_address": { 00:15:27.713 "trtype": "TCP", 00:15:27.713 "adrfam": "IPv4", 00:15:27.713 "traddr": "10.0.0.2", 00:15:27.713 "trsvcid": "4420" 00:15:27.713 }, 00:15:27.713 "peer_address": { 00:15:27.713 "trtype": "TCP", 00:15:27.713 "adrfam": "IPv4", 00:15:27.713 "traddr": "10.0.0.1", 00:15:27.713 "trsvcid": "37616" 00:15:27.713 }, 00:15:27.713 "auth": { 00:15:27.713 "state": "completed", 00:15:27.713 "digest": "sha256", 00:15:27.713 "dhgroup": "ffdhe8192" 00:15:27.713 } 00:15:27.713 } 00:15:27.713 ]' 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.713 17:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.713 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.972 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:27.972 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.540 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.799 
17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.799 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.800 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.800 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.800 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.058 00:15:29.058 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.058 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.058 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.059 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.059 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.059 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.059 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.318 { 00:15:29.318 "cntlid": 49, 00:15:29.318 "qid": 0, 00:15:29.318 "state": "enabled", 00:15:29.318 "thread": "nvmf_tgt_poll_group_000", 00:15:29.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.318 "listen_address": { 00:15:29.318 "trtype": "TCP", 00:15:29.318 "adrfam": "IPv4", 00:15:29.318 "traddr": "10.0.0.2", 00:15:29.318 "trsvcid": "4420" 00:15:29.318 }, 00:15:29.318 "peer_address": { 00:15:29.318 "trtype": "TCP", 00:15:29.318 "adrfam": "IPv4", 00:15:29.318 "traddr": "10.0.0.1", 00:15:29.318 "trsvcid": "37638" 00:15:29.318 }, 00:15:29.318 "auth": { 00:15:29.318 "state": "completed", 00:15:29.318 "digest": "sha384", 00:15:29.318 "dhgroup": "null" 00:15:29.318 } 00:15:29.318 } 00:15:29.318 ]' 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:29.318 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.577 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:29.578 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:30.146 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.146 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.146 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.146 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.146 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.146 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.146 17:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.146 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.406 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.406 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.665 { 00:15:30.665 "cntlid": 51, 00:15:30.665 "qid": 0, 00:15:30.665 "state": "enabled", 00:15:30.665 "thread": "nvmf_tgt_poll_group_000", 00:15:30.665 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.665 "listen_address": { 00:15:30.665 "trtype": "TCP", 00:15:30.665 "adrfam": "IPv4", 00:15:30.665 "traddr": "10.0.0.2", 00:15:30.665 "trsvcid": "4420" 00:15:30.665 }, 00:15:30.665 "peer_address": { 00:15:30.665 "trtype": "TCP", 00:15:30.665 "adrfam": "IPv4", 00:15:30.665 "traddr": "10.0.0.1", 00:15:30.665 "trsvcid": "37666" 00:15:30.665 }, 00:15:30.665 "auth": { 00:15:30.665 "state": "completed", 00:15:30.665 "digest": "sha384", 00:15:30.665 "dhgroup": "null" 00:15:30.665 } 00:15:30.665 } 00:15:30.665 ]' 00:15:30.665 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.924 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.924 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.924 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.924 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.924 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.924 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.924 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.184 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:31.184 17:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.753 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.013 00:15:32.272 17:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.272 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.272 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.273 { 00:15:32.273 "cntlid": 53, 00:15:32.273 "qid": 0, 00:15:32.273 "state": "enabled", 00:15:32.273 "thread": "nvmf_tgt_poll_group_000", 00:15:32.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.273 "listen_address": { 00:15:32.273 "trtype": "TCP", 00:15:32.273 "adrfam": "IPv4", 00:15:32.273 "traddr": "10.0.0.2", 00:15:32.273 "trsvcid": "4420" 00:15:32.273 }, 00:15:32.273 "peer_address": { 00:15:32.273 "trtype": "TCP", 00:15:32.273 "adrfam": "IPv4", 00:15:32.273 "traddr": "10.0.0.1", 00:15:32.273 "trsvcid": "36778" 00:15:32.273 }, 00:15:32.273 "auth": { 00:15:32.273 "state": "completed", 00:15:32.273 "digest": "sha384", 00:15:32.273 "dhgroup": "null" 00:15:32.273 } 00:15:32.273 } 00:15:32.273 ]' 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.273 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.532 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:32.532 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.532 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.532 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.532 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.792 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:32.792 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:33.361 
17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.361 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.620 00:15:33.879 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.880 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.880 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.880 17:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.880 { 00:15:33.880 "cntlid": 55, 00:15:33.880 "qid": 0, 00:15:33.880 "state": "enabled", 00:15:33.880 "thread": "nvmf_tgt_poll_group_000", 00:15:33.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:33.880 "listen_address": { 00:15:33.880 "trtype": "TCP", 00:15:33.880 "adrfam": "IPv4", 00:15:33.880 "traddr": "10.0.0.2", 00:15:33.880 "trsvcid": "4420" 00:15:33.880 }, 00:15:33.880 "peer_address": { 00:15:33.880 "trtype": "TCP", 00:15:33.880 "adrfam": "IPv4", 00:15:33.880 "traddr": "10.0.0.1", 00:15:33.880 "trsvcid": "36816" 00:15:33.880 }, 00:15:33.880 "auth": { 00:15:33.880 "state": "completed", 00:15:33.880 "digest": "sha384", 00:15:33.880 "dhgroup": "null" 00:15:33.880 } 00:15:33.880 } 00:15:33.880 ]' 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.880 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.139 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.139 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.139 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.139 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.139 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.399 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:34.399 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.968 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.968 17:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.968 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.228 00:15:35.228 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.228 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.228 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.487 { 00:15:35.487 "cntlid": 57, 00:15:35.487 "qid": 0, 00:15:35.487 "state": "enabled", 00:15:35.487 "thread": "nvmf_tgt_poll_group_000", 00:15:35.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.487 "listen_address": { 00:15:35.487 "trtype": "TCP", 00:15:35.487 "adrfam": "IPv4", 00:15:35.487 "traddr": "10.0.0.2", 00:15:35.487 
"trsvcid": "4420" 00:15:35.487 }, 00:15:35.487 "peer_address": { 00:15:35.487 "trtype": "TCP", 00:15:35.487 "adrfam": "IPv4", 00:15:35.487 "traddr": "10.0.0.1", 00:15:35.487 "trsvcid": "36850" 00:15:35.487 }, 00:15:35.487 "auth": { 00:15:35.487 "state": "completed", 00:15:35.487 "digest": "sha384", 00:15:35.487 "dhgroup": "ffdhe2048" 00:15:35.487 } 00:15:35.487 } 00:15:35.487 ]' 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.487 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.747 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.747 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.747 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.747 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.747 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.747 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:35.747 17:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:36.317 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.577 17:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.577 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.836 00:15:36.836 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.836 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.836 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.102 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.103 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.103 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.103 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.103 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.103 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.103 { 00:15:37.103 "cntlid": 59, 00:15:37.103 "qid": 0, 00:15:37.103 "state": "enabled", 00:15:37.103 "thread": "nvmf_tgt_poll_group_000", 00:15:37.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.103 "listen_address": { 00:15:37.103 "trtype": "TCP", 00:15:37.103 "adrfam": "IPv4", 00:15:37.103 "traddr": "10.0.0.2", 00:15:37.103 "trsvcid": "4420" 00:15:37.103 }, 00:15:37.103 "peer_address": { 00:15:37.103 "trtype": "TCP", 00:15:37.103 "adrfam": "IPv4", 00:15:37.103 "traddr": "10.0.0.1", 00:15:37.103 "trsvcid": "36888" 00:15:37.103 }, 00:15:37.103 "auth": { 00:15:37.103 "state": "completed", 00:15:37.103 "digest": "sha384", 00:15:37.103 "dhgroup": "ffdhe2048" 00:15:37.103 } 00:15:37.103 } 00:15:37.103 ]' 00:15:37.103 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.103 17:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.103 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.367 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.367 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.367 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.367 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.367 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.367 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:37.367 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.306 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.566 00:15:38.566 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.566 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.566 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.825 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.825 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.825 17:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.825 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.825 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.825 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.825 { 00:15:38.825 "cntlid": 61, 00:15:38.825 "qid": 0, 00:15:38.825 "state": "enabled", 00:15:38.825 "thread": "nvmf_tgt_poll_group_000", 00:15:38.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.825 "listen_address": { 00:15:38.825 "trtype": "TCP", 00:15:38.825 "adrfam": "IPv4", 00:15:38.825 "traddr": "10.0.0.2", 00:15:38.825 "trsvcid": "4420" 00:15:38.825 }, 00:15:38.825 "peer_address": { 00:15:38.825 "trtype": "TCP", 00:15:38.825 "adrfam": "IPv4", 00:15:38.825 "traddr": "10.0.0.1", 00:15:38.826 "trsvcid": "36916" 00:15:38.826 }, 00:15:38.826 "auth": { 00:15:38.826 "state": "completed", 00:15:38.826 "digest": "sha384", 00:15:38.826 "dhgroup": "ffdhe2048" 00:15:38.826 } 00:15:38.826 } 00:15:38.826 ]' 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:39.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:39.659 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.660 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.660 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.660 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.660 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.660 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.660 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.660 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.954 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.955 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.276 00:15:40.276 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.276 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.276 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.628 { 00:15:40.628 "cntlid": 63, 00:15:40.628 "qid": 0, 00:15:40.628 "state": "enabled", 00:15:40.628 "thread": "nvmf_tgt_poll_group_000", 00:15:40.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.628 "listen_address": { 00:15:40.628 "trtype": "TCP", 00:15:40.628 "adrfam": 
"IPv4", 00:15:40.628 "traddr": "10.0.0.2", 00:15:40.628 "trsvcid": "4420" 00:15:40.628 }, 00:15:40.628 "peer_address": { 00:15:40.628 "trtype": "TCP", 00:15:40.628 "adrfam": "IPv4", 00:15:40.628 "traddr": "10.0.0.1", 00:15:40.628 "trsvcid": "36944" 00:15:40.628 }, 00:15:40.628 "auth": { 00:15:40.628 "state": "completed", 00:15:40.628 "digest": "sha384", 00:15:40.628 "dhgroup": "ffdhe2048" 00:15:40.628 } 00:15:40.628 } 00:15:40.628 ]' 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:40.628 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.230 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.488 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:41.488 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.488 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.488 
17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:41.488 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.488 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.488 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.488 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.489 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.489 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.489 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.489 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.489 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.748 00:15:41.748 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.748 17:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.748 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.006 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.007 { 00:15:42.007 "cntlid": 65, 00:15:42.007 "qid": 0, 00:15:42.007 "state": "enabled", 00:15:42.007 "thread": "nvmf_tgt_poll_group_000", 00:15:42.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.007 "listen_address": { 00:15:42.007 "trtype": "TCP", 00:15:42.007 "adrfam": "IPv4", 00:15:42.007 "traddr": "10.0.0.2", 00:15:42.007 "trsvcid": "4420" 00:15:42.007 }, 00:15:42.007 "peer_address": { 00:15:42.007 "trtype": "TCP", 00:15:42.007 "adrfam": "IPv4", 00:15:42.007 "traddr": "10.0.0.1", 00:15:42.007 "trsvcid": "48702" 00:15:42.007 }, 00:15:42.007 "auth": { 00:15:42.007 "state": "completed", 00:15:42.007 "digest": "sha384", 00:15:42.007 "dhgroup": "ffdhe3072" 00:15:42.007 } 00:15:42.007 } 00:15:42.007 ]' 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.007 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.267 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:42.267 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:42.835 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.094 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.353 00:15:43.353 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.353 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.353 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.612 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.612 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.612 17:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.612 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.612 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.612 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.612 { 00:15:43.613 "cntlid": 67, 00:15:43.613 "qid": 0, 00:15:43.613 "state": "enabled", 00:15:43.613 "thread": "nvmf_tgt_poll_group_000", 00:15:43.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.613 "listen_address": { 00:15:43.613 "trtype": "TCP", 00:15:43.613 "adrfam": "IPv4", 00:15:43.613 "traddr": "10.0.0.2", 00:15:43.613 "trsvcid": "4420" 00:15:43.613 }, 00:15:43.613 "peer_address": { 00:15:43.613 "trtype": "TCP", 00:15:43.613 "adrfam": "IPv4", 00:15:43.613 "traddr": "10.0.0.1", 00:15:43.613 "trsvcid": "48732" 00:15:43.613 }, 00:15:43.613 "auth": { 00:15:43.613 "state": "completed", 00:15:43.613 "digest": "sha384", 00:15:43.613 "dhgroup": "ffdhe3072" 00:15:43.613 } 00:15:43.613 } 00:15:43.613 ]' 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.613 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.875 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:43.875 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:44.447 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.707 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.966 00:15:44.966 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.966 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.966 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.225 { 00:15:45.225 "cntlid": 69, 00:15:45.225 "qid": 0, 00:15:45.225 "state": "enabled", 00:15:45.225 "thread": "nvmf_tgt_poll_group_000", 00:15:45.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.225 
"listen_address": { 00:15:45.225 "trtype": "TCP", 00:15:45.225 "adrfam": "IPv4", 00:15:45.225 "traddr": "10.0.0.2", 00:15:45.225 "trsvcid": "4420" 00:15:45.225 }, 00:15:45.225 "peer_address": { 00:15:45.225 "trtype": "TCP", 00:15:45.225 "adrfam": "IPv4", 00:15:45.225 "traddr": "10.0.0.1", 00:15:45.225 "trsvcid": "48756" 00:15:45.225 }, 00:15:45.225 "auth": { 00:15:45.225 "state": "completed", 00:15:45.225 "digest": "sha384", 00:15:45.225 "dhgroup": "ffdhe3072" 00:15:45.225 } 00:15:45.225 } 00:15:45.225 ]' 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.225 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.484 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:45.484 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.053 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.313 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.572 00:15:46.572 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.572 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:46.572 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.831 { 00:15:46.831 "cntlid": 71, 00:15:46.831 "qid": 0, 00:15:46.831 "state": "enabled", 00:15:46.831 "thread": "nvmf_tgt_poll_group_000", 00:15:46.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.831 "listen_address": { 00:15:46.831 "trtype": "TCP", 00:15:46.831 "adrfam": "IPv4", 00:15:46.831 "traddr": "10.0.0.2", 00:15:46.831 "trsvcid": "4420" 00:15:46.831 }, 00:15:46.831 "peer_address": { 00:15:46.831 "trtype": "TCP", 00:15:46.831 "adrfam": "IPv4", 00:15:46.831 "traddr": "10.0.0.1", 00:15:46.831 "trsvcid": "48780" 00:15:46.831 }, 00:15:46.831 "auth": { 00:15:46.831 "state": "completed", 00:15:46.831 "digest": "sha384", 00:15:46.831 "dhgroup": "ffdhe3072" 00:15:46.831 } 00:15:46.831 } 00:15:46.831 ]' 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.831 17:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.831 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.090 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:47.090 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:47.658 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:47.917 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.917 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.917 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.917 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.917 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.176 00:15:48.176 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.176 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.176 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.435 17:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.435 { 00:15:48.435 "cntlid": 73, 00:15:48.435 "qid": 0, 00:15:48.435 "state": "enabled", 00:15:48.435 "thread": "nvmf_tgt_poll_group_000", 00:15:48.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.435 "listen_address": { 00:15:48.435 "trtype": "TCP", 00:15:48.435 "adrfam": "IPv4", 00:15:48.435 "traddr": "10.0.0.2", 00:15:48.435 "trsvcid": "4420" 00:15:48.435 }, 00:15:48.435 "peer_address": { 00:15:48.435 "trtype": "TCP", 00:15:48.435 "adrfam": "IPv4", 00:15:48.435 "traddr": "10.0.0.1", 00:15:48.435 "trsvcid": "48814" 00:15:48.435 }, 00:15:48.435 "auth": { 00:15:48.435 "state": "completed", 00:15:48.435 "digest": "sha384", 00:15:48.435 "dhgroup": "ffdhe4096" 00:15:48.435 } 00:15:48.435 } 00:15:48.435 ]' 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.435 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.435 17:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.695 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:48.695 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.263 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.529 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.788 00:15:49.788 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.788 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.788 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.048 { 00:15:50.048 "cntlid": 75, 00:15:50.048 "qid": 0, 00:15:50.048 "state": "enabled", 00:15:50.048 "thread": "nvmf_tgt_poll_group_000", 00:15:50.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.048 
"listen_address": { 00:15:50.048 "trtype": "TCP", 00:15:50.048 "adrfam": "IPv4", 00:15:50.048 "traddr": "10.0.0.2", 00:15:50.048 "trsvcid": "4420" 00:15:50.048 }, 00:15:50.048 "peer_address": { 00:15:50.048 "trtype": "TCP", 00:15:50.048 "adrfam": "IPv4", 00:15:50.048 "traddr": "10.0.0.1", 00:15:50.048 "trsvcid": "48840" 00:15:50.048 }, 00:15:50.048 "auth": { 00:15:50.048 "state": "completed", 00:15:50.048 "digest": "sha384", 00:15:50.048 "dhgroup": "ffdhe4096" 00:15:50.048 } 00:15:50.048 } 00:15:50.048 ]' 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.048 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.307 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.307 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.307 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:50.307 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.877 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.135 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.395 00:15:51.395 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:51.395 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.395 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.653 { 00:15:51.653 "cntlid": 77, 00:15:51.653 "qid": 0, 00:15:51.653 "state": "enabled", 00:15:51.653 "thread": "nvmf_tgt_poll_group_000", 00:15:51.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.653 "listen_address": { 00:15:51.653 "trtype": "TCP", 00:15:51.653 "adrfam": "IPv4", 00:15:51.653 "traddr": "10.0.0.2", 00:15:51.653 "trsvcid": "4420" 00:15:51.653 }, 00:15:51.653 "peer_address": { 00:15:51.653 "trtype": "TCP", 00:15:51.653 "adrfam": "IPv4", 00:15:51.653 "traddr": "10.0.0.1", 00:15:51.653 "trsvcid": "48862" 00:15:51.653 }, 00:15:51.653 "auth": { 00:15:51.653 "state": "completed", 00:15:51.653 "digest": "sha384", 00:15:51.653 "dhgroup": "ffdhe4096" 00:15:51.653 } 00:15:51.653 } 00:15:51.653 ]' 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.653 17:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.653 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.912 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.912 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.912 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.912 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:51.912 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:52.480 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.481 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.481 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.481 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.481 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.481 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.481 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.481 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:52.739 17:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.739 17:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.999 00:15:52.999 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.999 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.999 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.258 17:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.258 { 00:15:53.258 "cntlid": 79, 00:15:53.258 "qid": 0, 00:15:53.258 "state": "enabled", 00:15:53.258 "thread": "nvmf_tgt_poll_group_000", 00:15:53.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.258 "listen_address": { 00:15:53.258 "trtype": "TCP", 00:15:53.258 "adrfam": "IPv4", 00:15:53.258 "traddr": "10.0.0.2", 00:15:53.258 "trsvcid": "4420" 00:15:53.258 }, 00:15:53.258 "peer_address": { 00:15:53.258 "trtype": "TCP", 00:15:53.258 "adrfam": "IPv4", 00:15:53.258 "traddr": "10.0.0.1", 00:15:53.258 "trsvcid": "60906" 00:15:53.258 }, 00:15:53.258 "auth": { 00:15:53.258 "state": "completed", 00:15:53.258 "digest": "sha384", 00:15:53.258 "dhgroup": "ffdhe4096" 00:15:53.258 } 00:15:53.258 } 00:15:53.258 ]' 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.258 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.517 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.517 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.517 17:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.518 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:53.518 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:15:54.085 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.085 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.086 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.086 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.086 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.086 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.086 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.086 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:54.086 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.345 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.914 00:15:54.914 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.914 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.914 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.914 { 00:15:54.914 "cntlid": 81, 00:15:54.914 "qid": 0, 00:15:54.914 "state": "enabled", 00:15:54.914 "thread": "nvmf_tgt_poll_group_000", 00:15:54.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.914 "listen_address": { 
00:15:54.914 "trtype": "TCP", 00:15:54.914 "adrfam": "IPv4", 00:15:54.914 "traddr": "10.0.0.2", 00:15:54.914 "trsvcid": "4420" 00:15:54.914 }, 00:15:54.914 "peer_address": { 00:15:54.914 "trtype": "TCP", 00:15:54.914 "adrfam": "IPv4", 00:15:54.914 "traddr": "10.0.0.1", 00:15:54.914 "trsvcid": "60948" 00:15:54.914 }, 00:15:54.914 "auth": { 00:15:54.914 "state": "completed", 00:15:54.914 "digest": "sha384", 00:15:54.914 "dhgroup": "ffdhe6144" 00:15:54.914 } 00:15:54.914 } 00:15:54.914 ]' 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.914 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.173 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.174 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.174 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.174 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.174 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.433 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:55.433 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:15:56.002 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.002 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.002 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.002 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.002 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.002 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.571 00:15:56.571 17:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.571 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.571 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.571 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.571 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.571 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.571 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.571 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.830 { 00:15:56.830 "cntlid": 83, 00:15:56.830 "qid": 0, 00:15:56.830 "state": "enabled", 00:15:56.830 "thread": "nvmf_tgt_poll_group_000", 00:15:56.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.830 "listen_address": { 00:15:56.830 "trtype": "TCP", 00:15:56.830 "adrfam": "IPv4", 00:15:56.830 "traddr": "10.0.0.2", 00:15:56.830 "trsvcid": "4420" 00:15:56.830 }, 00:15:56.830 "peer_address": { 00:15:56.830 "trtype": "TCP", 00:15:56.830 "adrfam": "IPv4", 00:15:56.830 "traddr": "10.0.0.1", 00:15:56.830 "trsvcid": "60974" 00:15:56.830 }, 00:15:56.830 "auth": { 00:15:56.830 "state": "completed", 00:15:56.830 "digest": "sha384", 00:15:56.830 "dhgroup": "ffdhe6144" 00:15:56.830 } 00:15:56.830 } 00:15:56.830 ]' 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.830 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.089 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:57.089 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:15:57.657 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.657 17:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.658 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.658 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.658 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.658 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.658 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:57.658 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.917 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.177 00:15:58.177 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.177 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.177 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.437 { 00:15:58.437 "cntlid": 85, 00:15:58.437 "qid": 0, 00:15:58.437 "state": "enabled", 00:15:58.437 "thread": "nvmf_tgt_poll_group_000", 00:15:58.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.437 "listen_address": { 00:15:58.437 "trtype": "TCP", 00:15:58.437 "adrfam": "IPv4", 00:15:58.437 "traddr": "10.0.0.2", 00:15:58.437 "trsvcid": "4420" 00:15:58.437 }, 00:15:58.437 "peer_address": { 00:15:58.437 "trtype": "TCP", 00:15:58.437 "adrfam": "IPv4", 00:15:58.437 "traddr": "10.0.0.1", 00:15:58.437 "trsvcid": "60990" 00:15:58.437 }, 00:15:58.437 "auth": { 00:15:58.437 "state": "completed", 00:15:58.437 "digest": "sha384", 00:15:58.437 "dhgroup": "ffdhe6144" 00:15:58.437 } 00:15:58.437 } 00:15:58.437 ]' 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.437 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.697 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:58.697 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.266 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.526 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.785 00:15:59.785 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.785 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.785 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.044 { 00:16:00.044 "cntlid": 87, 00:16:00.044 "qid": 0, 00:16:00.044 "state": "enabled", 00:16:00.044 "thread": "nvmf_tgt_poll_group_000", 00:16:00.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.044 "listen_address": { 00:16:00.044 "trtype": 
"TCP", 00:16:00.044 "adrfam": "IPv4", 00:16:00.044 "traddr": "10.0.0.2", 00:16:00.044 "trsvcid": "4420" 00:16:00.044 }, 00:16:00.044 "peer_address": { 00:16:00.044 "trtype": "TCP", 00:16:00.044 "adrfam": "IPv4", 00:16:00.044 "traddr": "10.0.0.1", 00:16:00.044 "trsvcid": "32790" 00:16:00.044 }, 00:16:00.044 "auth": { 00:16:00.044 "state": "completed", 00:16:00.044 "digest": "sha384", 00:16:00.044 "dhgroup": "ffdhe6144" 00:16:00.044 } 00:16:00.044 } 00:16:00.044 ]' 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.044 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.303 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.303 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.303 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.303 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.303 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:00.303 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:00.872 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.872 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.872 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.872 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.872 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.872 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.131 17:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.131 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.700 00:16:01.700 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.700 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.700 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.961 { 00:16:01.961 "cntlid": 89, 00:16:01.961 "qid": 0, 00:16:01.961 "state": "enabled", 00:16:01.961 "thread": "nvmf_tgt_poll_group_000", 00:16:01.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.961 "listen_address": { 00:16:01.961 "trtype": "TCP", 00:16:01.961 "adrfam": "IPv4", 00:16:01.961 "traddr": "10.0.0.2", 00:16:01.961 "trsvcid": "4420" 00:16:01.961 }, 00:16:01.961 "peer_address": { 00:16:01.961 "trtype": "TCP", 00:16:01.961 "adrfam": "IPv4", 00:16:01.961 "traddr": "10.0.0.1", 00:16:01.961 "trsvcid": "32800" 00:16:01.961 }, 00:16:01.961 "auth": { 00:16:01.961 "state": "completed", 00:16:01.961 "digest": "sha384", 00:16:01.961 "dhgroup": "ffdhe8192" 00:16:01.961 } 00:16:01.961 } 00:16:01.961 ]' 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.961 17:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.961 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.221 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:02.221 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:02.790 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:02.790 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.790 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.790 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.791 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.791 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.791 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.791 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.050 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.619 00:16:03.619 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.619 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.619 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.879 { 00:16:03.879 "cntlid": 91, 00:16:03.879 "qid": 0, 00:16:03.879 "state": "enabled", 00:16:03.879 "thread": "nvmf_tgt_poll_group_000", 00:16:03.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.879 "listen_address": { 00:16:03.879 "trtype": "TCP", 00:16:03.879 "adrfam": "IPv4", 00:16:03.879 "traddr": "10.0.0.2", 00:16:03.879 "trsvcid": "4420" 00:16:03.879 }, 00:16:03.879 "peer_address": { 00:16:03.879 "trtype": "TCP", 00:16:03.879 "adrfam": "IPv4", 00:16:03.879 "traddr": "10.0.0.1", 00:16:03.879 "trsvcid": "34106" 00:16:03.879 }, 00:16:03.879 "auth": { 00:16:03.879 "state": "completed", 00:16:03.879 "digest": "sha384", 00:16:03.879 "dhgroup": "ffdhe8192" 00:16:03.879 } 00:16:03.879 } 00:16:03.879 ]' 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.879 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.879 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:03.879 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.879 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.138 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:04.138 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.707 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.967 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.536 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.536 { 00:16:05.536 "cntlid": 93, 00:16:05.536 "qid": 0, 00:16:05.536 "state": "enabled", 00:16:05.536 "thread": "nvmf_tgt_poll_group_000", 00:16:05.536 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.536 "listen_address": { 00:16:05.536 "trtype": "TCP", 00:16:05.536 "adrfam": "IPv4", 00:16:05.536 "traddr": "10.0.0.2", 00:16:05.536 "trsvcid": "4420" 00:16:05.536 }, 00:16:05.536 "peer_address": { 00:16:05.536 "trtype": "TCP", 00:16:05.536 "adrfam": "IPv4", 00:16:05.536 "traddr": "10.0.0.1", 00:16:05.536 "trsvcid": "34138" 00:16:05.536 }, 00:16:05.536 "auth": { 00:16:05.536 "state": "completed", 00:16:05.536 "digest": "sha384", 00:16:05.536 "dhgroup": "ffdhe8192" 00:16:05.536 } 00:16:05.536 } 00:16:05.536 ]' 00:16:05.536 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.797 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.797 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.797 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:05.797 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.797 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.797 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.797 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.056 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:06.056 17:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.625 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.885 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.453 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.453 { 00:16:07.453 "cntlid": 95, 00:16:07.453 "qid": 0, 00:16:07.453 "state": "enabled", 00:16:07.453 "thread": "nvmf_tgt_poll_group_000", 00:16:07.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.453 "listen_address": { 00:16:07.453 "trtype": "TCP", 00:16:07.453 "adrfam": "IPv4", 00:16:07.453 "traddr": "10.0.0.2", 00:16:07.453 "trsvcid": "4420" 00:16:07.453 }, 00:16:07.453 "peer_address": { 00:16:07.453 "trtype": "TCP", 00:16:07.453 "adrfam": "IPv4", 00:16:07.453 "traddr": "10.0.0.1", 00:16:07.453 "trsvcid": "34168" 00:16:07.453 }, 00:16:07.453 "auth": { 00:16:07.453 "state": "completed", 00:16:07.453 "digest": "sha384", 00:16:07.453 "dhgroup": "ffdhe8192" 00:16:07.453 } 00:16:07.453 } 00:16:07.453 ]' 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.453 17:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.453 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.714 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.714 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.714 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.714 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.714 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.714 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:07.714 17:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:08.283 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.542 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.543 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.802 00:16:08.802 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.802 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.802 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.061 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.061 17:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.061 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.061 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.062 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.062 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.062 { 00:16:09.062 "cntlid": 97, 00:16:09.062 "qid": 0, 00:16:09.062 "state": "enabled", 00:16:09.062 "thread": "nvmf_tgt_poll_group_000", 00:16:09.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.062 "listen_address": { 00:16:09.062 "trtype": "TCP", 00:16:09.062 "adrfam": "IPv4", 00:16:09.062 "traddr": "10.0.0.2", 00:16:09.062 "trsvcid": "4420" 00:16:09.062 }, 00:16:09.062 "peer_address": { 00:16:09.062 "trtype": "TCP", 00:16:09.062 "adrfam": "IPv4", 00:16:09.062 "traddr": "10.0.0.1", 00:16:09.062 "trsvcid": "34192" 00:16:09.062 }, 00:16:09.062 "auth": { 00:16:09.062 "state": "completed", 00:16:09.062 "digest": "sha512", 00:16:09.062 "dhgroup": "null" 00:16:09.062 } 00:16:09.062 } 00:16:09.062 ]' 00:16:09.062 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.062 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.062 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.062 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:09.062 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.326 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.326 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.326 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.326 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:09.326 17:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.923 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.185 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.444 00:16:10.444 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.444 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.444 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.702 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.702 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.702 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.702 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.702 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.702 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.702 { 00:16:10.702 "cntlid": 99, 
00:16:10.702 "qid": 0, 00:16:10.702 "state": "enabled", 00:16:10.702 "thread": "nvmf_tgt_poll_group_000", 00:16:10.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.702 "listen_address": { 00:16:10.703 "trtype": "TCP", 00:16:10.703 "adrfam": "IPv4", 00:16:10.703 "traddr": "10.0.0.2", 00:16:10.703 "trsvcid": "4420" 00:16:10.703 }, 00:16:10.703 "peer_address": { 00:16:10.703 "trtype": "TCP", 00:16:10.703 "adrfam": "IPv4", 00:16:10.703 "traddr": "10.0.0.1", 00:16:10.703 "trsvcid": "34218" 00:16:10.703 }, 00:16:10.703 "auth": { 00:16:10.703 "state": "completed", 00:16:10.703 "digest": "sha512", 00:16:10.703 "dhgroup": "null" 00:16:10.703 } 00:16:10.703 } 00:16:10.703 ]' 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.703 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.961 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret 
DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:10.961 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.528 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.787 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.046 00:16:12.046 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.046 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.046 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.306 { 00:16:12.306 "cntlid": 101, 00:16:12.306 "qid": 0, 00:16:12.306 "state": "enabled", 00:16:12.306 "thread": "nvmf_tgt_poll_group_000", 00:16:12.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.306 "listen_address": { 00:16:12.306 "trtype": "TCP", 00:16:12.306 "adrfam": "IPv4", 00:16:12.306 "traddr": "10.0.0.2", 00:16:12.306 "trsvcid": "4420" 00:16:12.306 }, 00:16:12.306 "peer_address": { 00:16:12.306 "trtype": "TCP", 00:16:12.306 "adrfam": "IPv4", 00:16:12.306 "traddr": "10.0.0.1", 00:16:12.306 "trsvcid": "56668" 00:16:12.306 }, 00:16:12.306 "auth": { 00:16:12.306 "state": "completed", 00:16:12.306 "digest": "sha512", 00:16:12.306 "dhgroup": "null" 00:16:12.306 } 00:16:12.306 } 
00:16:12.306 ]' 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.306 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.566 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:12.566 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.135 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.135 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.393 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.653 00:16:13.653 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.653 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.653 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.912 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.912 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:13.912 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.912 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.912 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.912 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.912 { 00:16:13.912 "cntlid": 103, 00:16:13.912 "qid": 0, 00:16:13.912 "state": "enabled", 00:16:13.912 "thread": "nvmf_tgt_poll_group_000", 00:16:13.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.912 "listen_address": { 00:16:13.912 "trtype": "TCP", 00:16:13.912 "adrfam": "IPv4", 00:16:13.912 "traddr": "10.0.0.2", 00:16:13.912 "trsvcid": "4420" 00:16:13.912 }, 00:16:13.912 "peer_address": { 00:16:13.912 "trtype": "TCP", 00:16:13.912 "adrfam": "IPv4", 00:16:13.912 "traddr": "10.0.0.1", 00:16:13.912 "trsvcid": "56694" 00:16:13.912 }, 00:16:13.912 "auth": { 00:16:13.912 "state": "completed", 00:16:13.912 "digest": "sha512", 00:16:13.912 "dhgroup": "null" 00:16:13.912 } 00:16:13.912 } 00:16:13.912 ]' 00:16:13.912 17:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.912 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.912 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.912 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:13.912 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.912 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.912 17:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.912 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.172 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:14.172 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.745 17:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.745 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.005 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.265 00:16:15.265 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.265 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.265 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.524 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.524 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.524 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.524 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.524 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.524 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.524 { 00:16:15.524 "cntlid": 105, 00:16:15.524 "qid": 0, 00:16:15.525 "state": "enabled", 00:16:15.525 "thread": "nvmf_tgt_poll_group_000", 00:16:15.525 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.525 "listen_address": { 00:16:15.525 "trtype": "TCP", 00:16:15.525 "adrfam": "IPv4", 00:16:15.525 "traddr": "10.0.0.2", 00:16:15.525 "trsvcid": "4420" 00:16:15.525 }, 00:16:15.525 "peer_address": { 00:16:15.525 "trtype": "TCP", 00:16:15.525 "adrfam": "IPv4", 00:16:15.525 "traddr": "10.0.0.1", 00:16:15.525 "trsvcid": "56722" 00:16:15.525 }, 00:16:15.525 "auth": { 00:16:15.525 "state": "completed", 00:16:15.525 "digest": "sha512", 00:16:15.525 "dhgroup": "ffdhe2048" 00:16:15.525 } 00:16:15.525 } 00:16:15.525 ]' 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.525 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.784 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret 
DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:15.784 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.353 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.613 17:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.613 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.872 00:16:16.872 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.872 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.872 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.151 { 00:16:17.151 "cntlid": 107, 00:16:17.151 "qid": 0, 00:16:17.151 "state": "enabled", 00:16:17.151 "thread": "nvmf_tgt_poll_group_000", 00:16:17.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.151 "listen_address": { 00:16:17.151 "trtype": "TCP", 00:16:17.151 "adrfam": "IPv4", 00:16:17.151 "traddr": "10.0.0.2", 00:16:17.151 "trsvcid": "4420" 00:16:17.151 }, 00:16:17.151 "peer_address": { 00:16:17.151 "trtype": "TCP", 00:16:17.151 "adrfam": "IPv4", 00:16:17.151 "traddr": "10.0.0.1", 00:16:17.151 "trsvcid": "56752" 00:16:17.151 }, 00:16:17.151 "auth": { 00:16:17.151 "state": 
"completed", 00:16:17.151 "digest": "sha512", 00:16:17.151 "dhgroup": "ffdhe2048" 00:16:17.151 } 00:16:17.151 } 00:16:17.151 ]' 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.151 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.152 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.422 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:17.422 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:17.991 17:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.991 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.991 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.991 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.991 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.991 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.991 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.991 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.250 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.510 00:16:18.510 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.510 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.510 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.769 
17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.769 { 00:16:18.769 "cntlid": 109, 00:16:18.769 "qid": 0, 00:16:18.769 "state": "enabled", 00:16:18.769 "thread": "nvmf_tgt_poll_group_000", 00:16:18.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.769 "listen_address": { 00:16:18.769 "trtype": "TCP", 00:16:18.769 "adrfam": "IPv4", 00:16:18.769 "traddr": "10.0.0.2", 00:16:18.769 "trsvcid": "4420" 00:16:18.769 }, 00:16:18.769 "peer_address": { 00:16:18.769 "trtype": "TCP", 00:16:18.769 "adrfam": "IPv4", 00:16:18.769 "traddr": "10.0.0.1", 00:16:18.769 "trsvcid": "56798" 00:16:18.769 }, 00:16:18.769 "auth": { 00:16:18.769 "state": "completed", 00:16:18.769 "digest": "sha512", 00:16:18.769 "dhgroup": "ffdhe2048" 00:16:18.769 } 00:16:18.769 } 00:16:18.769 ]' 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.769 17:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.769 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.028 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:19.029 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:19.597 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.597 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.597 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.597 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.597 
17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.597 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.597 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.597 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.857 17:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.857 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.117 00:16:20.117 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.117 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.117 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.376 { 00:16:20.376 "cntlid": 111, 
00:16:20.376 "qid": 0, 00:16:20.376 "state": "enabled", 00:16:20.376 "thread": "nvmf_tgt_poll_group_000", 00:16:20.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.376 "listen_address": { 00:16:20.376 "trtype": "TCP", 00:16:20.376 "adrfam": "IPv4", 00:16:20.376 "traddr": "10.0.0.2", 00:16:20.376 "trsvcid": "4420" 00:16:20.376 }, 00:16:20.376 "peer_address": { 00:16:20.376 "trtype": "TCP", 00:16:20.376 "adrfam": "IPv4", 00:16:20.376 "traddr": "10.0.0.1", 00:16:20.376 "trsvcid": "56820" 00:16:20.376 }, 00:16:20.376 "auth": { 00:16:20.376 "state": "completed", 00:16:20.376 "digest": "sha512", 00:16:20.376 "dhgroup": "ffdhe2048" 00:16:20.376 } 00:16:20.376 } 00:16:20.376 ]' 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.376 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.635 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:20.635 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.204 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.463 17:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:21.463 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.463 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.463 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.463 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.463 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.464 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.464 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.464 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.464 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.464 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.464 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.464 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.723 00:16:21.723 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.723 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.723 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.983 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.983 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.983 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.983 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.983 { 00:16:21.983 "cntlid": 113, 00:16:21.983 "qid": 0, 00:16:21.983 "state": "enabled", 00:16:21.983 "thread": "nvmf_tgt_poll_group_000", 00:16:21.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.983 "listen_address": { 00:16:21.983 "trtype": "TCP", 00:16:21.983 "adrfam": "IPv4", 00:16:21.983 "traddr": "10.0.0.2", 00:16:21.983 "trsvcid": "4420" 00:16:21.983 }, 00:16:21.983 "peer_address": { 00:16:21.983 "trtype": "TCP", 00:16:21.983 "adrfam": "IPv4", 00:16:21.983 "traddr": "10.0.0.1", 00:16:21.983 "trsvcid": "58960" 00:16:21.983 }, 00:16:21.983 "auth": { 00:16:21.983 "state": 
"completed", 00:16:21.983 "digest": "sha512", 00:16:21.983 "dhgroup": "ffdhe3072" 00:16:21.983 } 00:16:21.983 } 00:16:21.983 ]' 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.983 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.243 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:22.243 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret 
DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.811 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.071 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.330 00:16:23.330 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.330 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.330 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.590 { 00:16:23.590 "cntlid": 115, 00:16:23.590 "qid": 0, 00:16:23.590 "state": "enabled", 00:16:23.590 "thread": "nvmf_tgt_poll_group_000", 00:16:23.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.590 "listen_address": { 00:16:23.590 "trtype": "TCP", 00:16:23.590 "adrfam": "IPv4", 00:16:23.590 "traddr": "10.0.0.2", 00:16:23.590 "trsvcid": "4420" 00:16:23.590 }, 00:16:23.590 "peer_address": { 00:16:23.590 "trtype": "TCP", 00:16:23.590 "adrfam": "IPv4", 00:16:23.590 "traddr": "10.0.0.1", 00:16:23.590 "trsvcid": "58988" 00:16:23.590 }, 00:16:23.590 "auth": { 00:16:23.590 "state": "completed", 00:16:23.590 "digest": "sha512", 00:16:23.590 "dhgroup": "ffdhe3072" 00:16:23.590 } 00:16:23.590 } 00:16:23.590 ]' 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.590 17:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.590 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.849 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:23.849 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:24.417 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.417 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.417 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:24.417 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.417 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.417 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.418 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.418 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.677 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.936 00:16:24.936 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.936 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.936 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.195 17:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.195 { 00:16:25.195 "cntlid": 117, 00:16:25.195 "qid": 0, 00:16:25.195 "state": "enabled", 00:16:25.195 "thread": "nvmf_tgt_poll_group_000", 00:16:25.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.195 "listen_address": { 00:16:25.195 "trtype": "TCP", 00:16:25.195 "adrfam": "IPv4", 00:16:25.195 "traddr": "10.0.0.2", 00:16:25.195 "trsvcid": "4420" 00:16:25.195 }, 00:16:25.195 "peer_address": { 00:16:25.195 "trtype": "TCP", 00:16:25.195 "adrfam": "IPv4", 00:16:25.195 "traddr": "10.0.0.1", 00:16:25.195 "trsvcid": "59014" 00:16:25.195 }, 00:16:25.195 "auth": { 00:16:25.195 "state": "completed", 00:16:25.195 "digest": "sha512", 00:16:25.195 "dhgroup": "ffdhe3072" 00:16:25.195 } 00:16:25.195 } 00:16:25.195 ]' 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.195 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.196 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.196 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.196 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.455 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:25.455 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.022 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.282 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.541 00:16:26.541 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.541 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.541 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.800 { 00:16:26.800 "cntlid": 119, 00:16:26.800 "qid": 0, 00:16:26.800 "state": "enabled", 00:16:26.800 "thread": "nvmf_tgt_poll_group_000", 00:16:26.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.800 "listen_address": { 00:16:26.800 "trtype": "TCP", 00:16:26.800 "adrfam": "IPv4", 00:16:26.800 "traddr": "10.0.0.2", 00:16:26.800 "trsvcid": "4420" 00:16:26.800 }, 00:16:26.800 "peer_address": { 00:16:26.800 "trtype": "TCP", 00:16:26.800 "adrfam": "IPv4", 00:16:26.800 "traddr": "10.0.0.1", 
00:16:26.800 "trsvcid": "59044" 00:16:26.800 }, 00:16:26.800 "auth": { 00:16:26.800 "state": "completed", 00:16:26.800 "digest": "sha512", 00:16:26.800 "dhgroup": "ffdhe3072" 00:16:26.800 } 00:16:26.800 } 00:16:26.800 ]' 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.800 17:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.063 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:27.063 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.639 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.897 17:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.897 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.156 00:16:28.156 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.156 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.156 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.415 { 00:16:28.415 "cntlid": 121, 00:16:28.415 "qid": 0, 00:16:28.415 "state": "enabled", 00:16:28.415 "thread": "nvmf_tgt_poll_group_000", 00:16:28.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.415 "listen_address": { 00:16:28.415 "trtype": "TCP", 00:16:28.415 "adrfam": "IPv4", 00:16:28.415 "traddr": "10.0.0.2", 00:16:28.415 "trsvcid": "4420" 00:16:28.415 }, 00:16:28.415 "peer_address": { 00:16:28.415 "trtype": "TCP", 00:16:28.415 "adrfam": "IPv4", 00:16:28.415 "traddr": "10.0.0.1", 00:16:28.415 "trsvcid": "59066" 00:16:28.415 }, 00:16:28.415 "auth": { 00:16:28.415 "state": "completed", 00:16:28.415 "digest": "sha512", 00:16:28.415 "dhgroup": "ffdhe4096" 00:16:28.415 } 00:16:28.415 } 00:16:28.415 ]' 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.415 17:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.415 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.674 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:28.674 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:29.241 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.241 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.241 17:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.241 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.241 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.241 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.241 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.241 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.499 17:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.499 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.500 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.500 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.500 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.759 00:16:29.759 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.759 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.759 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.018 { 00:16:30.018 "cntlid": 123, 00:16:30.018 "qid": 0, 00:16:30.018 "state": "enabled", 00:16:30.018 "thread": "nvmf_tgt_poll_group_000", 00:16:30.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.018 "listen_address": { 00:16:30.018 "trtype": "TCP", 00:16:30.018 "adrfam": "IPv4", 00:16:30.018 "traddr": "10.0.0.2", 00:16:30.018 "trsvcid": "4420" 00:16:30.018 }, 00:16:30.018 "peer_address": { 00:16:30.018 "trtype": "TCP", 00:16:30.018 "adrfam": "IPv4", 00:16:30.018 "traddr": "10.0.0.1", 00:16:30.018 "trsvcid": "59076" 00:16:30.018 }, 00:16:30.018 "auth": { 00:16:30.018 "state": "completed", 00:16:30.018 "digest": "sha512", 00:16:30.018 "dhgroup": "ffdhe4096" 00:16:30.018 } 00:16:30.018 } 00:16:30.018 ]' 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.018 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.278 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.278 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.278 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.278 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:30.278 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:30.846 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.846 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.846 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.846 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.846 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.846 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.846 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.846 17:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.105 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.364 00:16:31.364 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.364 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.364 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.622 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.622 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.622 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.622 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.622 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.622 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.622 { 00:16:31.622 "cntlid": 125, 00:16:31.622 "qid": 0, 00:16:31.622 "state": "enabled", 00:16:31.622 "thread": "nvmf_tgt_poll_group_000", 00:16:31.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.623 "listen_address": { 00:16:31.623 "trtype": "TCP", 00:16:31.623 "adrfam": "IPv4", 00:16:31.623 "traddr": "10.0.0.2", 00:16:31.623 
"trsvcid": "4420" 00:16:31.623 }, 00:16:31.623 "peer_address": { 00:16:31.623 "trtype": "TCP", 00:16:31.623 "adrfam": "IPv4", 00:16:31.623 "traddr": "10.0.0.1", 00:16:31.623 "trsvcid": "59102" 00:16:31.623 }, 00:16:31.623 "auth": { 00:16:31.623 "state": "completed", 00:16:31.623 "digest": "sha512", 00:16:31.623 "dhgroup": "ffdhe4096" 00:16:31.623 } 00:16:31.623 } 00:16:31.623 ]' 00:16:31.623 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.623 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.623 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.882 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.882 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.882 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.882 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.882 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.882 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:31.882 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:32.453 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.453 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.453 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.453 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.711 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.969 00:16:32.969 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.969 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.969 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.228 { 00:16:33.228 "cntlid": 127, 00:16:33.228 "qid": 0, 00:16:33.228 "state": "enabled", 00:16:33.228 "thread": "nvmf_tgt_poll_group_000", 00:16:33.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.228 "listen_address": { 00:16:33.228 "trtype": "TCP", 00:16:33.228 "adrfam": "IPv4", 00:16:33.228 "traddr": "10.0.0.2", 00:16:33.228 "trsvcid": "4420" 00:16:33.228 }, 00:16:33.228 "peer_address": { 00:16:33.228 "trtype": "TCP", 00:16:33.228 "adrfam": "IPv4", 00:16:33.228 "traddr": "10.0.0.1", 00:16:33.228 "trsvcid": "55704" 00:16:33.228 }, 00:16:33.228 "auth": { 00:16:33.228 "state": "completed", 00:16:33.228 "digest": "sha512", 00:16:33.228 "dhgroup": "ffdhe4096" 00:16:33.228 } 00:16:33.228 } 00:16:33.228 ]' 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.228 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.486 17:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.486 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.486 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.486 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.486 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.746 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:33.746 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.315 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.883 00:16:34.883 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.883 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.883 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.883 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.883 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.883 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.883 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.883 17:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.883 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.883 { 00:16:34.883 "cntlid": 129, 00:16:34.883 "qid": 0, 00:16:34.883 "state": "enabled", 00:16:34.883 "thread": "nvmf_tgt_poll_group_000", 00:16:34.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.883 "listen_address": { 00:16:34.883 "trtype": "TCP", 00:16:34.883 "adrfam": "IPv4", 00:16:34.883 "traddr": "10.0.0.2", 00:16:34.883 "trsvcid": "4420" 00:16:34.883 }, 00:16:34.883 "peer_address": { 00:16:34.883 "trtype": "TCP", 00:16:34.883 "adrfam": "IPv4", 00:16:34.883 "traddr": "10.0.0.1", 00:16:34.883 "trsvcid": "55734" 00:16:34.883 }, 00:16:34.883 "auth": { 00:16:34.883 "state": "completed", 00:16:34.883 "digest": "sha512", 00:16:34.883 "dhgroup": "ffdhe6144" 00:16:34.883 } 00:16:34.883 } 00:16:34.883 ]' 00:16:34.883 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.142 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.142 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.142 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.142 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.142 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.142 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.142 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.401 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:35.401 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:35.970 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.970 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.970 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.970 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.970 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.970 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.970 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.970 17:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.229 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.488 00:16:36.488 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.488 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.488 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.747 { 00:16:36.747 "cntlid": 131, 00:16:36.747 "qid": 0, 00:16:36.747 "state": "enabled", 00:16:36.747 "thread": "nvmf_tgt_poll_group_000", 00:16:36.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.747 "listen_address": { 00:16:36.747 "trtype": "TCP", 00:16:36.747 "adrfam": "IPv4", 00:16:36.747 "traddr": "10.0.0.2", 00:16:36.747 
"trsvcid": "4420" 00:16:36.747 }, 00:16:36.747 "peer_address": { 00:16:36.747 "trtype": "TCP", 00:16:36.747 "adrfam": "IPv4", 00:16:36.747 "traddr": "10.0.0.1", 00:16:36.747 "trsvcid": "55760" 00:16:36.747 }, 00:16:36.747 "auth": { 00:16:36.747 "state": "completed", 00:16:36.747 "digest": "sha512", 00:16:36.747 "dhgroup": "ffdhe6144" 00:16:36.747 } 00:16:36.747 } 00:16:36.747 ]' 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.747 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.748 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.006 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:37.006 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.574 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.832 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:37.832 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.832 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.833 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.091 00:16:38.091 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.091 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:38.091 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.351 { 00:16:38.351 "cntlid": 133, 00:16:38.351 "qid": 0, 00:16:38.351 "state": "enabled", 00:16:38.351 "thread": "nvmf_tgt_poll_group_000", 00:16:38.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.351 "listen_address": { 00:16:38.351 "trtype": "TCP", 00:16:38.351 "adrfam": "IPv4", 00:16:38.351 "traddr": "10.0.0.2", 00:16:38.351 "trsvcid": "4420" 00:16:38.351 }, 00:16:38.351 "peer_address": { 00:16:38.351 "trtype": "TCP", 00:16:38.351 "adrfam": "IPv4", 00:16:38.351 "traddr": "10.0.0.1", 00:16:38.351 "trsvcid": "55782" 00:16:38.351 }, 00:16:38.351 "auth": { 00:16:38.351 "state": "completed", 00:16:38.351 "digest": "sha512", 00:16:38.351 "dhgroup": "ffdhe6144" 00:16:38.351 } 00:16:38.351 } 00:16:38.351 ]' 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.351 17:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.351 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.610 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:38.610 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.178 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.437 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:39.437 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.437 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.437 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.438 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.697 00:16:39.697 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.697 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.697 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.956 { 00:16:39.956 "cntlid": 135, 00:16:39.956 "qid": 0, 00:16:39.956 "state": "enabled", 00:16:39.956 "thread": "nvmf_tgt_poll_group_000", 00:16:39.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.956 "listen_address": { 00:16:39.956 "trtype": "TCP", 00:16:39.956 "adrfam": "IPv4", 00:16:39.956 "traddr": "10.0.0.2", 00:16:39.956 "trsvcid": "4420" 00:16:39.956 }, 00:16:39.956 "peer_address": { 00:16:39.956 "trtype": "TCP", 00:16:39.956 "adrfam": "IPv4", 00:16:39.956 "traddr": "10.0.0.1", 00:16:39.956 "trsvcid": "55810" 00:16:39.956 }, 00:16:39.956 "auth": { 00:16:39.956 "state": "completed", 00:16:39.956 "digest": "sha512", 00:16:39.956 "dhgroup": "ffdhe6144" 00:16:39.956 } 00:16:39.956 } 00:16:39.956 ]' 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.956 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.215 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.215 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.215 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.215 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.215 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.215 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:40.215 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:40.783 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.042 17:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.042 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.043 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.043 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.043 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.043 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.043 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.610 00:16:41.611 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.611 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.611 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.869 { 00:16:41.869 "cntlid": 137, 00:16:41.869 "qid": 0, 00:16:41.869 "state": "enabled", 00:16:41.869 "thread": "nvmf_tgt_poll_group_000", 00:16:41.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.869 "listen_address": { 00:16:41.869 "trtype": "TCP", 00:16:41.869 "adrfam": "IPv4", 00:16:41.869 "traddr": "10.0.0.2", 00:16:41.869 
"trsvcid": "4420" 00:16:41.869 }, 00:16:41.869 "peer_address": { 00:16:41.869 "trtype": "TCP", 00:16:41.869 "adrfam": "IPv4", 00:16:41.869 "traddr": "10.0.0.1", 00:16:41.869 "trsvcid": "55840" 00:16:41.869 }, 00:16:41.869 "auth": { 00:16:41.869 "state": "completed", 00:16:41.869 "digest": "sha512", 00:16:41.869 "dhgroup": "ffdhe8192" 00:16:41.869 } 00:16:41.869 } 00:16:41.869 ]' 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.869 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.869 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.869 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.869 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.128 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:42.128 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.697 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.956 17:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.956 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.956 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.523 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.523 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.523 { 00:16:43.523 "cntlid": 139, 00:16:43.523 "qid": 0, 00:16:43.523 "state": "enabled", 00:16:43.523 "thread": "nvmf_tgt_poll_group_000", 00:16:43.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.523 "listen_address": { 00:16:43.523 "trtype": "TCP", 00:16:43.523 "adrfam": "IPv4", 00:16:43.523 "traddr": "10.0.0.2", 00:16:43.523 "trsvcid": "4420" 00:16:43.523 }, 00:16:43.523 "peer_address": { 00:16:43.523 "trtype": "TCP", 00:16:43.523 "adrfam": "IPv4", 00:16:43.523 "traddr": "10.0.0.1", 00:16:43.523 "trsvcid": "47894" 00:16:43.523 }, 00:16:43.524 "auth": { 00:16:43.524 "state": "completed", 00:16:43.524 "digest": "sha512", 00:16:43.524 "dhgroup": "ffdhe8192" 00:16:43.524 } 00:16:43.524 } 00:16:43.524 ]' 00:16:43.524 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.782 17:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.782 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.782 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.782 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.782 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.782 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.782 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.041 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:44.041 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: --dhchap-ctrl-secret DHHC-1:02:MGI2YzJiZjljYWRlZmFhYTZiZmI5ZTI0MTFjYzI1Nzc0ZjY2MGVmZDVmMWZmOTg0pa7HYw==: 00:16:44.608 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.608 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.608 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.608 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.608 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.608 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.609 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.867 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.867 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.867 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.867 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.126 00:16:45.126 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.126 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.126 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.384 17:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.384 { 00:16:45.384 "cntlid": 141, 00:16:45.384 "qid": 0, 00:16:45.384 "state": "enabled", 00:16:45.384 "thread": "nvmf_tgt_poll_group_000", 00:16:45.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.384 "listen_address": { 00:16:45.384 "trtype": "TCP", 00:16:45.384 "adrfam": "IPv4", 00:16:45.384 "traddr": "10.0.0.2", 00:16:45.384 "trsvcid": "4420" 00:16:45.384 }, 00:16:45.384 "peer_address": { 00:16:45.384 "trtype": "TCP", 00:16:45.384 "adrfam": "IPv4", 00:16:45.384 "traddr": "10.0.0.1", 00:16:45.384 "trsvcid": "47920" 00:16:45.384 }, 00:16:45.384 "auth": { 00:16:45.384 "state": "completed", 00:16:45.384 "digest": "sha512", 00:16:45.384 "dhgroup": "ffdhe8192" 00:16:45.384 } 00:16:45.384 } 00:16:45.384 ]' 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.384 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.643 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.643 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.643 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.643 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.643 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.902 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:45.902 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:01:ZTY5YjFmOTViZWEyOThlODAwM2Y4MzVhZjdhNzNjNWWWqsXp: 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.470 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.037 00:16:47.037 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.037 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.037 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.296 { 00:16:47.296 "cntlid": 143, 00:16:47.296 "qid": 0, 00:16:47.296 "state": "enabled", 00:16:47.296 "thread": "nvmf_tgt_poll_group_000", 00:16:47.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.296 "listen_address": { 00:16:47.296 "trtype": "TCP", 00:16:47.296 "adrfam": 
"IPv4", 00:16:47.296 "traddr": "10.0.0.2", 00:16:47.296 "trsvcid": "4420" 00:16:47.296 }, 00:16:47.296 "peer_address": { 00:16:47.296 "trtype": "TCP", 00:16:47.296 "adrfam": "IPv4", 00:16:47.296 "traddr": "10.0.0.1", 00:16:47.296 "trsvcid": "47930" 00:16:47.296 }, 00:16:47.296 "auth": { 00:16:47.296 "state": "completed", 00:16:47.296 "digest": "sha512", 00:16:47.296 "dhgroup": "ffdhe8192" 00:16:47.296 } 00:16:47.296 } 00:16:47.296 ]' 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.296 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.555 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:47.555 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.122 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.381 17:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.381 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.948 00:16:48.948 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.948 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.948 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.207 { 00:16:49.207 "cntlid": 145, 00:16:49.207 "qid": 0, 00:16:49.207 "state": "enabled", 00:16:49.207 "thread": "nvmf_tgt_poll_group_000", 00:16:49.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.207 "listen_address": { 00:16:49.207 "trtype": "TCP", 00:16:49.207 "adrfam": "IPv4", 00:16:49.207 "traddr": "10.0.0.2", 00:16:49.207 "trsvcid": "4420" 00:16:49.207 }, 00:16:49.207 "peer_address": { 00:16:49.207 "trtype": "TCP", 00:16:49.207 "adrfam": "IPv4", 00:16:49.207 "traddr": "10.0.0.1", 00:16:49.207 "trsvcid": "47952" 00:16:49.207 }, 00:16:49.207 "auth": { 00:16:49.207 "state": 
"completed", 00:16:49.207 "digest": "sha512", 00:16:49.207 "dhgroup": "ffdhe8192" 00:16:49.207 } 00:16:49.207 } 00:16:49.207 ]' 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.207 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.467 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:49.467 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWE3ODdkYmViNzVmOWNiNTg4YjhlYmZlNzUyMDAzYTAzNTlmMzkxNDlhZmZkNzVknEDSgw==: --dhchap-ctrl-secret 
DHHC-1:03:YmRlY2Y1ZjMwYzNiNWE0NjViZDNmY2FhYzg2ZTZhMjZjZjcxODlkNDNkODU3YjZkOThkNjk3N2RmNGUzMjliORQhSOc=: 00:16:50.034 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.034 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:50.035 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:50.626 request: 00:16:50.626 { 00:16:50.626 "name": "nvme0", 00:16:50.626 "trtype": "tcp", 00:16:50.626 "traddr": "10.0.0.2", 00:16:50.626 "adrfam": "ipv4", 00:16:50.626 "trsvcid": "4420", 00:16:50.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.626 "prchk_reftag": false, 00:16:50.626 "prchk_guard": false, 00:16:50.626 "hdgst": false, 00:16:50.626 "ddgst": false, 00:16:50.626 "dhchap_key": "key2", 00:16:50.626 "allow_unrecognized_csi": false, 00:16:50.626 "method": "bdev_nvme_attach_controller", 00:16:50.626 "req_id": 1 00:16:50.626 } 00:16:50.626 Got JSON-RPC error response 00:16:50.626 response: 00:16:50.626 { 00:16:50.626 "code": -5, 00:16:50.626 "message": 
"Input/output error" 00:16:50.626 } 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:50.626 17:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.626 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:50.627 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.627 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:50.627 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.627 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.627 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.627 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.950 request: 00:16:50.950 { 00:16:50.950 "name": "nvme0", 00:16:50.950 "trtype": "tcp", 00:16:50.950 "traddr": "10.0.0.2", 00:16:50.950 "adrfam": "ipv4", 00:16:50.950 "trsvcid": "4420", 00:16:50.950 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.950 "prchk_reftag": false, 00:16:50.950 "prchk_guard": false, 00:16:50.950 "hdgst": 
false, 00:16:50.950 "ddgst": false, 00:16:50.950 "dhchap_key": "key1", 00:16:50.950 "dhchap_ctrlr_key": "ckey2", 00:16:50.950 "allow_unrecognized_csi": false, 00:16:50.950 "method": "bdev_nvme_attach_controller", 00:16:50.950 "req_id": 1 00:16:50.950 } 00:16:50.950 Got JSON-RPC error response 00:16:50.950 response: 00:16:50.950 { 00:16:50.950 "code": -5, 00:16:50.950 "message": "Input/output error" 00:16:50.950 } 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.950 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:50.951 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.951 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.951 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.951 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.619 request: 00:16:51.619 { 00:16:51.619 "name": "nvme0", 00:16:51.619 "trtype": 
"tcp", 00:16:51.619 "traddr": "10.0.0.2", 00:16:51.619 "adrfam": "ipv4", 00:16:51.619 "trsvcid": "4420", 00:16:51.619 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:51.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.619 "prchk_reftag": false, 00:16:51.619 "prchk_guard": false, 00:16:51.619 "hdgst": false, 00:16:51.619 "ddgst": false, 00:16:51.619 "dhchap_key": "key1", 00:16:51.619 "dhchap_ctrlr_key": "ckey1", 00:16:51.619 "allow_unrecognized_csi": false, 00:16:51.619 "method": "bdev_nvme_attach_controller", 00:16:51.619 "req_id": 1 00:16:51.619 } 00:16:51.619 Got JSON-RPC error response 00:16:51.619 response: 00:16:51.619 { 00:16:51.619 "code": -5, 00:16:51.619 "message": "Input/output error" 00:16:51.619 } 00:16:51.619 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:51.619 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:51.619 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:51.619 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3435123 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3435123 ']' 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3435123 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3435123 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3435123' 00:16:51.620 killing process with pid 3435123 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3435123 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3435123 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3457368 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3457368 00:16:51.620 17:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3457368 ']' 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.620 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3457368 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3457368 ']' 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.878 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.137 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.137 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:52.137 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:52.137 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.137 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 null0 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zgX 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.v6F ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6F 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sFy 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.RWA ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RWA 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.McA 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.3w4 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3w4 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QZO 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.396 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.329 nvme0n1 00:16:53.329 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.330 { 00:16:53.330 "cntlid": 1, 00:16:53.330 "qid": 0, 00:16:53.330 "state": "enabled", 00:16:53.330 "thread": "nvmf_tgt_poll_group_000", 00:16:53.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.330 "listen_address": { 00:16:53.330 "trtype": "TCP", 00:16:53.330 "adrfam": "IPv4", 00:16:53.330 "traddr": "10.0.0.2", 00:16:53.330 "trsvcid": "4420" 00:16:53.330 }, 00:16:53.330 "peer_address": { 00:16:53.330 "trtype": "TCP", 00:16:53.330 "adrfam": "IPv4", 00:16:53.330 "traddr": 
"10.0.0.1", 00:16:53.330 "trsvcid": "43178" 00:16:53.330 }, 00:16:53.330 "auth": { 00:16:53.330 "state": "completed", 00:16:53.330 "digest": "sha512", 00:16:53.330 "dhgroup": "ffdhe8192" 00:16:53.330 } 00:16:53.330 } 00:16:53.330 ]' 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.330 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.588 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:53.588 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:54.154 17:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.154 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.154 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.154 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.154 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.154 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.154 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.154 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:54.412 17:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.412 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.670 request: 00:16:54.670 { 00:16:54.670 "name": "nvme0", 00:16:54.670 "trtype": "tcp", 00:16:54.670 "traddr": "10.0.0.2", 00:16:54.670 "adrfam": "ipv4", 00:16:54.670 "trsvcid": "4420", 00:16:54.670 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.670 "prchk_reftag": false, 00:16:54.670 "prchk_guard": false, 00:16:54.670 "hdgst": false, 00:16:54.670 "ddgst": false, 00:16:54.670 "dhchap_key": "key3", 00:16:54.670 
"allow_unrecognized_csi": false, 00:16:54.670 "method": "bdev_nvme_attach_controller", 00:16:54.670 "req_id": 1 00:16:54.670 } 00:16:54.670 Got JSON-RPC error response 00:16:54.670 response: 00:16:54.670 { 00:16:54.670 "code": -5, 00:16:54.670 "message": "Input/output error" 00:16:54.670 } 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:54.670 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:54.928 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:54.928 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:54.928 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:54.928 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:54.928 17:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.928 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:54.928 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.928 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.928 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.928 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.186 request: 00:16:55.186 { 00:16:55.186 "name": "nvme0", 00:16:55.186 "trtype": "tcp", 00:16:55.186 "traddr": "10.0.0.2", 00:16:55.186 "adrfam": "ipv4", 00:16:55.186 "trsvcid": "4420", 00:16:55.186 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.186 "prchk_reftag": false, 00:16:55.186 "prchk_guard": false, 00:16:55.186 "hdgst": false, 00:16:55.186 "ddgst": false, 00:16:55.186 "dhchap_key": "key3", 00:16:55.186 "allow_unrecognized_csi": false, 00:16:55.186 "method": "bdev_nvme_attach_controller", 00:16:55.186 "req_id": 1 00:16:55.186 } 00:16:55.186 Got JSON-RPC error response 00:16:55.186 response: 00:16:55.186 { 00:16:55.186 "code": -5, 00:16:55.186 "message": "Input/output error" 00:16:55.186 } 00:16:55.186 
17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.186 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:55.444 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.445 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:55.445 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.445 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.445 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.445 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.703 request: 00:16:55.703 { 00:16:55.703 "name": "nvme0", 00:16:55.703 "trtype": "tcp", 00:16:55.703 "traddr": "10.0.0.2", 00:16:55.703 "adrfam": "ipv4", 00:16:55.703 "trsvcid": "4420", 00:16:55.703 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.703 "prchk_reftag": false, 00:16:55.703 "prchk_guard": false, 00:16:55.703 "hdgst": false, 00:16:55.703 "ddgst": false, 00:16:55.703 "dhchap_key": "key0", 00:16:55.703 "dhchap_ctrlr_key": "key1", 00:16:55.703 "allow_unrecognized_csi": false, 00:16:55.703 "method": "bdev_nvme_attach_controller", 00:16:55.703 "req_id": 1 00:16:55.703 } 00:16:55.703 Got JSON-RPC error response 00:16:55.703 response: 00:16:55.703 { 00:16:55.703 "code": -5, 00:16:55.703 "message": "Input/output error" 00:16:55.703 } 00:16:55.703 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:55.703 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.703 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.703 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.703 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:55.703 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:55.703 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:55.961 nvme0n1 00:16:55.961 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:55.961 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:55.961 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.219 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.219 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.219 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.476 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:56.476 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.476 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:56.476 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.476 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:56.476 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:56.476 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:57.041 nvme0n1 00:16:57.041 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:57.041 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:57.041 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.298 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.298 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.298 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.298 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.298 
17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.298 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:57.298 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:57.298 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.556 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.556 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:57.556 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: --dhchap-ctrl-secret DHHC-1:03:Yjg3NmI4ZWVlNTRiODVlOTE2ZWEzNzNhNDAyODIyNGFjZmMzMzJhZDA4OTQyYzgwM2JmY2E4NWJlOTYzYmYyYv02JrA=: 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.124 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:58.383 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:58.642 request: 00:16:58.642 { 00:16:58.642 "name": "nvme0", 00:16:58.642 "trtype": "tcp", 00:16:58.642 "traddr": "10.0.0.2", 00:16:58.642 "adrfam": "ipv4", 00:16:58.642 "trsvcid": "4420", 00:16:58.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:58.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.642 "prchk_reftag": false, 00:16:58.642 "prchk_guard": false, 00:16:58.642 "hdgst": false, 00:16:58.642 "ddgst": false, 00:16:58.642 "dhchap_key": "key1", 00:16:58.642 "allow_unrecognized_csi": false, 00:16:58.642 "method": "bdev_nvme_attach_controller", 00:16:58.642 "req_id": 1 00:16:58.642 } 00:16:58.642 Got JSON-RPC error response 00:16:58.642 response: 00:16:58.642 { 00:16:58.642 "code": -5, 00:16:58.642 "message": "Input/output error" 00:16:58.642 } 00:16:58.900 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:58.900 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.900 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.900 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.900 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:58.900 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:58.900 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:59.467 nvme0n1 00:16:59.467 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:59.467 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:59.467 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.726 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.726 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.726 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.985 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.985 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.985 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.985 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.985 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:59.985 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:59.985 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:00.245 nvme0n1 00:17:00.245 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:00.245 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:00.245 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: '' 2s 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: ]] 00:17:00.504 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy: 00:17:00.763 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:00.763 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:00.763 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:02.665 
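The `nvme_set_keys` steps traced above follow a simple pattern: write the new DHHC-1 secret into the controller's fabrics sysfs node, then sleep so the host can re-authenticate before the next check. A minimal stand-alone sketch of that pattern — the attribute names `dhchap_secret`/`dhchap_ctrl_secret` and the temp directory standing in for `/sys/devices/virtual/nvme-fabrics/ctl/nvme0` are assumptions for illustration, not the verbatim test helper:

```shell
#!/usr/bin/env bash
# Sketch of the nvme_set_keys pattern seen in the trace above.
# ASSUMPTION: per-controller attributes named dhchap_secret and
# dhchap_ctrl_secret; a temp dir stands in for the real sysfs path
# /sys/devices/virtual/nvme-fabrics/ctl/nvme0.
set -eu

dev=$(mktemp -d)
: > "$dev/dhchap_secret"
: > "$dev/dhchap_ctrl_secret"

nvme_set_keys() {
    local dev=$1 key=$2 ckey=$3 timeout=$4
    if [ -n "$key" ]; then
        echo "$key" > "$dev/dhchap_secret"        # host secret
    fi
    if [ -n "$ckey" ]; then
        echo "$ckey" > "$dev/dhchap_ctrl_secret"  # bidirectional (ctrlr) secret
    fi
    sleep "$timeout"   # let the host re-authenticate before the next step
}

# Mirrors "nvme_set_keys nvme0 DHHC-1:01:... '' 2s" from the trace
# (timeout shortened here).
nvme_set_keys "$dev" 'DHHC-1:01:YTNlYTU1MDM5ZmIxMjgyODZhNGExYjAyMGY5ZDQzZmReNpTy:' '' 0
```

The empty `ckey` argument in the first rotation (and the empty `key` in the second) is why the trace shows the `[[ -z ... ]]` guards firing before each `echo`.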
17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: 2s 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:02.665 17:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: ]] 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjAwZmIxMThlMGFjMGRkZTQxYTdjOWIyOGNjODQ0YTU4OTBjZTM2YjlhMmRhYTI3GNT1hg==: 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:02.665 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:04.568 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:04.568 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:04.568 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:04.568 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:04.826 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:04.826 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:04.826 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:04.826 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.826 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:04.826 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.827 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.827 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.827 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:04.827 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:04.827 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:05.394 nvme0n1 00:17:05.653 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.653 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.653 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.653 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.653 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.653 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.912 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:05.912 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:05.912 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.170 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.170 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.170 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.170 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.170 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.170 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:06.170 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:06.430 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:06.430 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:06.430 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.688 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:06.689 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.689 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:06.689 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:07.254 request: 00:17:07.254 { 00:17:07.254 "name": "nvme0", 00:17:07.254 "dhchap_key": "key1", 00:17:07.254 "dhchap_ctrlr_key": "key3", 00:17:07.254 "method": "bdev_nvme_set_keys", 00:17:07.254 "req_id": 1 00:17:07.254 } 00:17:07.254 Got JSON-RPC error response 00:17:07.254 response: 00:17:07.254 { 00:17:07.254 "code": -13, 00:17:07.254 "message": "Permission denied" 00:17:07.254 } 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:07.254 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:08.632 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:09.200 nvme0n1 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:09.200 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:09.767 request: 00:17:09.767 { 00:17:09.767 "name": "nvme0", 00:17:09.767 "dhchap_key": "key2", 00:17:09.767 "dhchap_ctrlr_key": "key0", 00:17:09.767 "method": "bdev_nvme_set_keys", 00:17:09.767 "req_id": 1 00:17:09.767 } 00:17:09.767 Got JSON-RPC error response 00:17:09.767 response: 00:17:09.767 { 00:17:09.767 "code": -13, 00:17:09.767 "message": "Permission denied" 00:17:09.767 } 00:17:09.767 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.767 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.767 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.767 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.767 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:09.767 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:09.767 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.026 
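After the expected `-13` (Permission denied) failure, the trace polls `bdev_nvme_get_controllers | jq length` with a 1s sleep until the misconfigured controller drops off the list (`--ctrlr-loss-timeout-sec 1` makes that happen within a couple of polls). A self-contained sketch of that poll loop, with a stub standing in for the `rpc.py` call and a python one-liner standing in for `jq length` (both are illustrative substitutions, not the test's own helpers):

```shell
#!/usr/bin/env bash
# Sketch of the "(( count != 0 )) && sleep" poll loop from the trace.
# A stub replaces "rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers";
# json_len replaces "jq length" (so the sketch has no jq dependency).
set -eu

polls=0
get_controllers() {   # stub: one controller for the first two polls, then none
    if [ "$polls" -lt 2 ]; then
        echo '[{"name": "nvme0"}]'
    else
        echo '[]'
    fi
}
json_len() { python3 -c 'import json, sys; print(len(json.load(sys.stdin)))'; }

while [ "$(get_controllers | json_len)" -ne 0 ]; do
    polls=$((polls + 1))
    sleep 0.1   # the real test sleeps 1s between polls
done
```

This is the `(( 1 != 0 )) … sleep 1s … (( 0 != 0 ))` sequence visible in the trace: the first poll still sees `nvme0`, a later poll sees an empty list and the loop exits.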
17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:10.026 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:10.962 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:10.962 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:10.962 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3435151 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3435151 ']' 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3435151 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3435151 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:11.221 17:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3435151' 00:17:11.221 killing process with pid 3435151 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3435151 00:17:11.221 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3435151 00:17:11.480 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:11.480 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:11.480 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:11.480 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:11.480 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:11.480 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:11.480 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:11.480 rmmod nvme_tcp 00:17:11.480 rmmod nvme_fabrics 00:17:11.739 rmmod nvme_keyring 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3457368 ']' 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3457368 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3457368 ']' 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # kill -0 3457368 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3457368 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3457368' 00:17:11.739 killing process with pid 3457368 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3457368 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3457368 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:11.739 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.740 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zgX /tmp/spdk.key-sha256.sFy /tmp/spdk.key-sha384.McA /tmp/spdk.key-sha512.QZO /tmp/spdk.key-sha512.v6F /tmp/spdk.key-sha384.RWA /tmp/spdk.key-sha256.3w4 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:14.274 00:17:14.274 real 2m33.988s 00:17:14.274 user 5m55.155s 00:17:14.274 sys 0m24.402s 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.274 ************************************ 00:17:14.274 END TEST nvmf_auth_target 00:17:14.274 ************************************ 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:14.274 17:35:16 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.274 ************************************ 00:17:14.274 START TEST nvmf_bdevio_no_huge 00:17:14.274 ************************************ 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:14.274 * Looking for test storage... 00:17:14.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.274 17:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.274 --rc genhtml_branch_coverage=1 00:17:14.274 --rc genhtml_function_coverage=1 00:17:14.274 --rc genhtml_legend=1 00:17:14.274 --rc geninfo_all_blocks=1 00:17:14.274 --rc geninfo_unexecuted_blocks=1 00:17:14.274 00:17:14.274 ' 00:17:14.274 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.274 --rc genhtml_branch_coverage=1 00:17:14.274 --rc genhtml_function_coverage=1 00:17:14.274 --rc genhtml_legend=1 00:17:14.275 --rc geninfo_all_blocks=1 00:17:14.275 --rc geninfo_unexecuted_blocks=1 00:17:14.275 00:17:14.275 ' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:14.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.275 --rc genhtml_branch_coverage=1 00:17:14.275 --rc genhtml_function_coverage=1 00:17:14.275 --rc genhtml_legend=1 00:17:14.275 --rc geninfo_all_blocks=1 00:17:14.275 --rc geninfo_unexecuted_blocks=1 00:17:14.275 00:17:14.275 ' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:14.275 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.275 --rc genhtml_branch_coverage=1 00:17:14.275 --rc genhtml_function_coverage=1 00:17:14.275 --rc genhtml_legend=1 00:17:14.275 --rc geninfo_all_blocks=1 00:17:14.275 --rc geninfo_unexecuted_blocks=1 00:17:14.275 00:17:14.275 ' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.275 17:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:14.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:14.275 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:20.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:20.850 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:20.850 Found net devices under 0000:86:00.0: cvl_0_0 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.850 
17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:20.850 Found net devices under 0000:86:00.1: cvl_0_1 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.850 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:20.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:17:20.851 00:17:20.851 --- 10.0.0.2 ping statistics --- 00:17:20.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.851 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:17:20.851 00:17:20.851 --- 10.0.0.1 ping statistics --- 00:17:20.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.851 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3464254 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3464254 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3464254 ']' 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.851 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.851 [2024-11-19 17:35:22.265921] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:17:20.851 [2024-11-19 17:35:22.265989] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:20.851 [2024-11-19 17:35:22.352192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.851 [2024-11-19 17:35:22.398783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.851 [2024-11-19 17:35:22.398817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.851 [2024-11-19 17:35:22.398824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.851 [2024-11-19 17:35:22.398830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.851 [2024-11-19 17:35:22.398835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.851 [2024-11-19 17:35:22.400027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:20.851 [2024-11-19 17:35:22.400136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:20.851 [2024-11-19 17:35:22.400241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.851 [2024-11-19 17:35:22.400242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:21.110 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.110 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:21.110 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.110 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.110 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.111 [2024-11-19 17:35:23.155498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:21.111 17:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.111 Malloc0 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.111 [2024-11-19 17:35:23.199819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.111 17:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:21.111 { 00:17:21.111 "params": { 00:17:21.111 "name": "Nvme$subsystem", 00:17:21.111 "trtype": "$TEST_TRANSPORT", 00:17:21.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.111 "adrfam": "ipv4", 00:17:21.111 "trsvcid": "$NVMF_PORT", 00:17:21.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.111 "hdgst": ${hdgst:-false}, 00:17:21.111 "ddgst": ${ddgst:-false} 00:17:21.111 }, 00:17:21.111 "method": "bdev_nvme_attach_controller" 00:17:21.111 } 00:17:21.111 EOF 00:17:21.111 )") 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:21.111 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:21.111 "params": { 00:17:21.111 "name": "Nvme1", 00:17:21.111 "trtype": "tcp", 00:17:21.111 "traddr": "10.0.0.2", 00:17:21.111 "adrfam": "ipv4", 00:17:21.111 "trsvcid": "4420", 00:17:21.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.111 "hdgst": false, 00:17:21.111 "ddgst": false 00:17:21.111 }, 00:17:21.111 "method": "bdev_nvme_attach_controller" 00:17:21.111 }' 00:17:21.111 [2024-11-19 17:35:23.253336] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:17:21.111 [2024-11-19 17:35:23.253380] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3464502 ] 00:17:21.370 [2024-11-19 17:35:23.333090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.370 [2024-11-19 17:35:23.382170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.370 [2024-11-19 17:35:23.382302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.370 [2024-11-19 17:35:23.382303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.629 I/O targets: 00:17:21.629 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:21.629 00:17:21.629 00:17:21.629 CUnit - A unit testing framework for C - Version 2.1-3 00:17:21.629 http://cunit.sourceforge.net/ 00:17:21.629 00:17:21.629 00:17:21.629 Suite: bdevio tests on: Nvme1n1 00:17:21.629 Test: blockdev write read block ...passed 00:17:21.629 Test: blockdev write zeroes read block ...passed 00:17:21.629 Test: blockdev write zeroes read no split ...passed 00:17:21.629 Test: blockdev write zeroes 
read split ...passed 00:17:21.629 Test: blockdev write zeroes read split partial ...passed 00:17:21.629 Test: blockdev reset ...[2024-11-19 17:35:23.832629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:21.629 [2024-11-19 17:35:23.832696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78f920 (9): Bad file descriptor 00:17:21.629 [2024-11-19 17:35:23.848058] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:21.629 passed 00:17:21.889 Test: blockdev write read 8 blocks ...passed 00:17:21.889 Test: blockdev write read size > 128k ...passed 00:17:21.889 Test: blockdev write read invalid size ...passed 00:17:21.889 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.889 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.889 Test: blockdev write read max offset ...passed 00:17:21.889 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.889 Test: blockdev writev readv 8 blocks ...passed 00:17:21.889 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.889 Test: blockdev writev readv block ...passed 00:17:21.889 Test: blockdev writev readv size > 128k ...passed 00:17:21.889 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.889 Test: blockdev comparev and writev ...[2024-11-19 17:35:24.099732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 17:35:24.099760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.889 [2024-11-19 17:35:24.099774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 
17:35:24.099783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:21.889 [2024-11-19 17:35:24.100032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 17:35:24.100043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:21.889 [2024-11-19 17:35:24.100062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 17:35:24.100071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:21.889 [2024-11-19 17:35:24.100313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 17:35:24.100325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:21.889 [2024-11-19 17:35:24.100337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 17:35:24.100344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:21.889 [2024-11-19 17:35:24.100582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 17:35:24.100594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:21.889 [2024-11-19 17:35:24.100607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.889 [2024-11-19 17:35:24.100614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:22.149 passed 00:17:22.149 Test: blockdev nvme passthru rw ...passed 00:17:22.149 Test: blockdev nvme passthru vendor specific ...[2024-11-19 17:35:24.182307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.149 [2024-11-19 17:35:24.182324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:22.149 [2024-11-19 17:35:24.182427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.149 [2024-11-19 17:35:24.182437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:22.149 [2024-11-19 17:35:24.182535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.149 [2024-11-19 17:35:24.182545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:22.149 [2024-11-19 17:35:24.182649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.149 [2024-11-19 17:35:24.182660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:22.149 passed 00:17:22.149 Test: blockdev nvme admin passthru ...passed 00:17:22.149 Test: blockdev copy ...passed 00:17:22.149 00:17:22.149 Run Summary: Type Total Ran Passed Failed Inactive 00:17:22.149 suites 1 1 n/a 0 0 00:17:22.149 tests 23 23 23 0 0 00:17:22.149 asserts 152 152 152 0 n/a 00:17:22.149 00:17:22.149 Elapsed time = 1.139 seconds 
00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.408 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.409 rmmod nvme_tcp 00:17:22.409 rmmod nvme_fabrics 00:17:22.409 rmmod nvme_keyring 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3464254 ']' 00:17:22.409 17:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3464254 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3464254 ']' 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3464254 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.409 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464254 00:17:22.668 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:22.668 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:22.668 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464254' 00:17:22.668 killing process with pid 3464254 00:17:22.668 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3464254 00:17:22.668 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3464254 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:22.927 17:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.927 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.833 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:24.833 00:17:24.833 real 0m10.924s 00:17:24.833 user 0m14.075s 00:17:24.833 sys 0m5.419s 00:17:24.833 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.833 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 ************************************ 00:17:24.833 END TEST nvmf_bdevio_no_huge 00:17:24.833 ************************************ 00:17:24.833 17:35:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:24.833 17:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:24.833 17:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.833 17:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.092 
************************************ 00:17:25.092 START TEST nvmf_tls 00:17:25.092 ************************************ 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.092 * Looking for test storage... 00:17:25.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.092 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.093 --rc genhtml_branch_coverage=1 00:17:25.093 --rc genhtml_function_coverage=1 00:17:25.093 --rc genhtml_legend=1 00:17:25.093 --rc geninfo_all_blocks=1 00:17:25.093 --rc geninfo_unexecuted_blocks=1 00:17:25.093 00:17:25.093 ' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.093 --rc genhtml_branch_coverage=1 00:17:25.093 --rc genhtml_function_coverage=1 00:17:25.093 --rc genhtml_legend=1 00:17:25.093 --rc geninfo_all_blocks=1 00:17:25.093 --rc geninfo_unexecuted_blocks=1 00:17:25.093 00:17:25.093 ' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.093 --rc genhtml_branch_coverage=1 00:17:25.093 --rc genhtml_function_coverage=1 00:17:25.093 --rc genhtml_legend=1 00:17:25.093 --rc geninfo_all_blocks=1 00:17:25.093 --rc geninfo_unexecuted_blocks=1 00:17:25.093 00:17:25.093 ' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.093 --rc genhtml_branch_coverage=1 00:17:25.093 --rc genhtml_function_coverage=1 00:17:25.093 --rc genhtml_legend=1 00:17:25.093 --rc geninfo_all_blocks=1 00:17:25.093 --rc geninfo_unexecuted_blocks=1 00:17:25.093 00:17:25.093 ' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.093 
17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:25.093 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:31.665 17:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:31.665 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:31.665 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:31.665 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.666 17:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:31.666 Found net devices under 0000:86:00.0: cvl_0_0 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:31.666 Found net devices under 0000:86:00.1: cvl_0_1 00:17:31.666 17:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:31.666 
17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.666 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:31.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:17:31.666 00:17:31.666 --- 10.0.0.2 ping statistics --- 00:17:31.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.666 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:17:31.666 00:17:31.666 --- 10.0.0.1 ping statistics --- 00:17:31.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.666 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3468259 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3468259 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3468259 ']' 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.666 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.667 [2024-11-19 17:35:33.262025] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:17:31.667 [2024-11-19 17:35:33.262070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.667 [2024-11-19 17:35:33.340607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.667 [2024-11-19 17:35:33.379960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.667 [2024-11-19 17:35:33.379995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:31.667 [2024-11-19 17:35:33.380002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.667 [2024-11-19 17:35:33.380009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.667 [2024-11-19 17:35:33.380014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.667 [2024-11-19 17:35:33.380583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:31.667 true 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:31.667 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:31.667 
17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:31.926 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:31.926 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.184 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:32.184 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:32.184 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:32.443 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.443 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:32.443 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:32.443 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:32.443 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.443 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:32.701 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:32.701 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:32.701 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:32.959 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:32.959 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:33.218 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:33.218 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:33.218 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:33.476 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:33.477 17:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:33.477 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zQiaWWHtag 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KD54sTOC7i 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zQiaWWHtag 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KD54sTOC7i 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:33.736 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:33.995 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zQiaWWHtag 00:17:33.995 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zQiaWWHtag 00:17:33.995 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.253 [2024-11-19 17:35:36.359714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.253 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.512 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:34.770 [2024-11-19 17:35:36.760740] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.770 [2024-11-19 17:35:36.760970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.770 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:34.770 malloc0 00:17:34.770 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.028 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zQiaWWHtag 00:17:35.287 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:35.544 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zQiaWWHtag 00:17:45.646 Initializing NVMe Controllers 00:17:45.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:45.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:45.646 Initialization complete. Launching workers. 
00:17:45.646 ======================================================== 00:17:45.646 Latency(us) 00:17:45.646 Device Information : IOPS MiB/s Average min max 00:17:45.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16295.49 63.65 3927.60 861.91 6270.38 00:17:45.646 ======================================================== 00:17:45.646 Total : 16295.49 63.65 3927.60 861.91 6270.38 00:17:45.646 00:17:45.646 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zQiaWWHtag 00:17:45.646 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:45.646 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zQiaWWHtag 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3470613 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3470613 /var/tmp/bdevperf.sock 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3470613 ']' 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.647 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.647 [2024-11-19 17:35:47.713827] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:17:45.647 [2024-11-19 17:35:47.713875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470613 ] 00:17:45.647 [2024-11-19 17:35:47.786038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.647 [2024-11-19 17:35:47.826427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.905 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.905 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:45.905 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zQiaWWHtag 00:17:46.164 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:46.164 [2024-11-19 17:35:48.298309] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.164 TLSTESTn1 00:17:46.422 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.422 Running I/O for 10 seconds... 00:17:48.289 5500.00 IOPS, 21.48 MiB/s [2024-11-19T16:35:51.886Z] 5486.50 IOPS, 21.43 MiB/s [2024-11-19T16:35:52.819Z] 5469.67 IOPS, 21.37 MiB/s [2024-11-19T16:35:53.753Z] 5485.50 IOPS, 21.43 MiB/s [2024-11-19T16:35:54.688Z] 5457.20 IOPS, 21.32 MiB/s [2024-11-19T16:35:55.621Z] 5456.17 IOPS, 21.31 MiB/s [2024-11-19T16:35:56.555Z] 5470.29 IOPS, 21.37 MiB/s [2024-11-19T16:35:57.928Z] 5448.50 IOPS, 21.28 MiB/s [2024-11-19T16:35:58.863Z] 5459.11 IOPS, 21.32 MiB/s [2024-11-19T16:35:58.863Z] 5467.50 IOPS, 21.36 MiB/s 00:17:56.640 Latency(us) 00:17:56.640 [2024-11-19T16:35:58.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.640 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:56.640 Verification LBA range: start 0x0 length 0x2000 00:17:56.640 TLSTESTn1 : 10.01 5473.15 21.38 0.00 0.00 23352.59 4616.01 23137.06 00:17:56.640 [2024-11-19T16:35:58.863Z] =================================================================================================================== 00:17:56.640 [2024-11-19T16:35:58.863Z] Total : 5473.15 21.38 0.00 0.00 23352.59 4616.01 23137.06 00:17:56.640 { 00:17:56.640 "results": [ 00:17:56.640 { 00:17:56.640 "job": "TLSTESTn1", 00:17:56.641 "core_mask": "0x4", 00:17:56.641 "workload": "verify", 00:17:56.641 "status": "finished", 00:17:56.641 "verify_range": { 00:17:56.641 "start": 0, 00:17:56.641 "length": 8192 00:17:56.641 }, 00:17:56.641 "queue_depth": 128, 00:17:56.641 "io_size": 4096, 00:17:56.641 "runtime": 10.012693, 00:17:56.641 "iops": 
5473.152926989772, 00:17:56.641 "mibps": 21.379503621053797, 00:17:56.641 "io_failed": 0, 00:17:56.641 "io_timeout": 0, 00:17:56.641 "avg_latency_us": 23352.589891869633, 00:17:56.641 "min_latency_us": 4616.013913043478, 00:17:56.641 "max_latency_us": 23137.057391304348 00:17:56.641 } 00:17:56.641 ], 00:17:56.641 "core_count": 1 00:17:56.641 } 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3470613 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3470613 ']' 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3470613 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470613 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470613' 00:17:56.641 killing process with pid 3470613 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3470613 00:17:56.641 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.641 00:17:56.641 Latency(us) 00:17:56.641 [2024-11-19T16:35:58.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.641 [2024-11-19T16:35:58.864Z] 
=================================================================================================================== 00:17:56.641 [2024-11-19T16:35:58.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3470613 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KD54sTOC7i 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KD54sTOC7i 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KD54sTOC7i 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KD54sTOC7i 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3472447 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3472447 /var/tmp/bdevperf.sock 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3472447 ']' 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.641 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.641 [2024-11-19 17:35:58.802576] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:17:56.641 [2024-11-19 17:35:58.802627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472447 ] 00:17:56.899 [2024-11-19 17:35:58.875634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.899 [2024-11-19 17:35:58.913267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.899 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.899 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.899 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KD54sTOC7i 00:17:57.157 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:57.157 [2024-11-19 17:35:59.376457] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.416 [2024-11-19 17:35:59.381541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.416 [2024-11-19 17:35:59.382137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caa170 (107): Transport endpoint is not connected 00:17:57.416 [2024-11-19 17:35:59.383129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caa170 (9): Bad file descriptor 00:17:57.416 
[2024-11-19 17:35:59.384130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:57.416 [2024-11-19 17:35:59.384140] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.416 [2024-11-19 17:35:59.384148] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:57.416 [2024-11-19 17:35:59.384159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:57.416 request: 00:17:57.416 { 00:17:57.416 "name": "TLSTEST", 00:17:57.416 "trtype": "tcp", 00:17:57.416 "traddr": "10.0.0.2", 00:17:57.416 "adrfam": "ipv4", 00:17:57.416 "trsvcid": "4420", 00:17:57.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.416 "prchk_reftag": false, 00:17:57.416 "prchk_guard": false, 00:17:57.416 "hdgst": false, 00:17:57.416 "ddgst": false, 00:17:57.416 "psk": "key0", 00:17:57.416 "allow_unrecognized_csi": false, 00:17:57.416 "method": "bdev_nvme_attach_controller", 00:17:57.416 "req_id": 1 00:17:57.416 } 00:17:57.416 Got JSON-RPC error response 00:17:57.416 response: 00:17:57.416 { 00:17:57.416 "code": -5, 00:17:57.416 "message": "Input/output error" 00:17:57.416 } 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3472447 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3472447 ']' 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3472447 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3472447 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3472447' 00:17:57.416 killing process with pid 3472447 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3472447 00:17:57.416 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.416 00:17:57.416 Latency(us) 00:17:57.416 [2024-11-19T16:35:59.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.416 [2024-11-19T16:35:59.639Z] =================================================================================================================== 00:17:57.416 [2024-11-19T16:35:59.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3472447 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zQiaWWHtag 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zQiaWWHtag 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zQiaWWHtag 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zQiaWWHtag 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3472525 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3472525 
/var/tmp/bdevperf.sock 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3472525 ']' 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.416 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.674 [2024-11-19 17:35:59.662115] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:17:57.674 [2024-11-19 17:35:59.662168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472525 ] 00:17:57.674 [2024-11-19 17:35:59.736559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.674 [2024-11-19 17:35:59.779388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.674 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.674 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.674 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zQiaWWHtag 00:17:57.932 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:58.190 [2024-11-19 17:36:00.258529] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.190 [2024-11-19 17:36:00.263246] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:58.190 [2024-11-19 17:36:00.263271] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:58.190 [2024-11-19 17:36:00.263297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:58.190 [2024-11-19 17:36:00.263964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177170 (107): Transport endpoint is not connected 00:17:58.190 [2024-11-19 17:36:00.264955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177170 (9): Bad file descriptor 00:17:58.190 [2024-11-19 17:36:00.265958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:58.190 [2024-11-19 17:36:00.265969] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.190 [2024-11-19 17:36:00.265977] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:58.190 [2024-11-19 17:36:00.265987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:58.190 request: 00:17:58.190 { 00:17:58.190 "name": "TLSTEST", 00:17:58.190 "trtype": "tcp", 00:17:58.190 "traddr": "10.0.0.2", 00:17:58.190 "adrfam": "ipv4", 00:17:58.190 "trsvcid": "4420", 00:17:58.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.190 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:58.190 "prchk_reftag": false, 00:17:58.190 "prchk_guard": false, 00:17:58.190 "hdgst": false, 00:17:58.190 "ddgst": false, 00:17:58.190 "psk": "key0", 00:17:58.190 "allow_unrecognized_csi": false, 00:17:58.190 "method": "bdev_nvme_attach_controller", 00:17:58.190 "req_id": 1 00:17:58.190 } 00:17:58.190 Got JSON-RPC error response 00:17:58.190 response: 00:17:58.190 { 00:17:58.190 "code": -5, 00:17:58.190 "message": "Input/output error" 00:17:58.190 } 00:17:58.190 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3472525 00:17:58.190 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3472525 ']' 00:17:58.190 17:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3472525 00:17:58.190 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.190 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.190 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3472525 00:17:58.190 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.190 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.191 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3472525' 00:17:58.191 killing process with pid 3472525 00:17:58.191 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3472525 00:17:58.191 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.191 00:17:58.191 Latency(us) 00:17:58.191 [2024-11-19T16:36:00.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.191 [2024-11-19T16:36:00.414Z] =================================================================================================================== 00:17:58.191 [2024-11-19T16:36:00.414Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.191 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3472525 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.450 17:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zQiaWWHtag 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zQiaWWHtag 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zQiaWWHtag 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zQiaWWHtag 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3472742 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3472742 /var/tmp/bdevperf.sock 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3472742 ']' 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.450 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.451 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.451 [2024-11-19 17:36:00.550564] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:17:58.451 [2024-11-19 17:36:00.550618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472742 ] 00:17:58.451 [2024-11-19 17:36:00.622699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.451 [2024-11-19 17:36:00.660258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.709 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.709 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.709 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zQiaWWHtag 00:17:58.967 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.967 [2024-11-19 17:36:01.143838] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.967 [2024-11-19 17:36:01.153702] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.967 [2024-11-19 17:36:01.153724] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.967 [2024-11-19 17:36:01.153750] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:58.967 [2024-11-19 17:36:01.154248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12170 (107): Transport endpoint is not connected 00:17:58.967 [2024-11-19 17:36:01.155242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12170 (9): Bad file descriptor 00:17:58.967 [2024-11-19 17:36:01.156243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:58.967 [2024-11-19 17:36:01.156255] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.967 [2024-11-19 17:36:01.156262] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:58.967 [2024-11-19 17:36:01.156272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:17:58.967 request: 00:17:58.967 { 00:17:58.967 "name": "TLSTEST", 00:17:58.967 "trtype": "tcp", 00:17:58.967 "traddr": "10.0.0.2", 00:17:58.967 "adrfam": "ipv4", 00:17:58.967 "trsvcid": "4420", 00:17:58.967 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:58.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.967 "prchk_reftag": false, 00:17:58.967 "prchk_guard": false, 00:17:58.967 "hdgst": false, 00:17:58.967 "ddgst": false, 00:17:58.967 "psk": "key0", 00:17:58.967 "allow_unrecognized_csi": false, 00:17:58.967 "method": "bdev_nvme_attach_controller", 00:17:58.967 "req_id": 1 00:17:58.967 } 00:17:58.967 Got JSON-RPC error response 00:17:58.967 response: 00:17:58.967 { 00:17:58.967 "code": -5, 00:17:58.967 "message": "Input/output error" 00:17:58.967 } 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3472742 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3472742 ']' 00:17:59.225 17:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3472742 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3472742 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3472742' 00:17:59.225 killing process with pid 3472742 00:17:59.225 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3472742 00:17:59.225 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.225 00:17:59.225 Latency(us) 00:17:59.225 [2024-11-19T16:36:01.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.225 [2024-11-19T16:36:01.448Z] =================================================================================================================== 00:17:59.225 [2024-11-19T16:36:01.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3472742 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.226 17:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3473036 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.226 17:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3473036 /var/tmp/bdevperf.sock 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3473036 ']' 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.226 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.226 [2024-11-19 17:36:01.440680] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:17:59.226 [2024-11-19 17:36:01.440733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473036 ] 00:17:59.484 [2024-11-19 17:36:01.515718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.484 [2024-11-19 17:36:01.557780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.484 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.484 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.484 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:59.742 [2024-11-19 17:36:01.823905] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:59.742 [2024-11-19 17:36:01.823951] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:59.742 request: 00:17:59.742 { 00:17:59.742 "name": "key0", 00:17:59.742 "path": "", 00:17:59.742 "method": "keyring_file_add_key", 00:17:59.742 "req_id": 1 00:17:59.742 } 00:17:59.742 Got JSON-RPC error response 00:17:59.742 response: 00:17:59.742 { 00:17:59.742 "code": -1, 00:17:59.742 "message": "Operation not permitted" 00:17:59.742 } 00:17:59.742 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:00.000 [2024-11-19 17:36:02.020506] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:00.000 [2024-11-19 17:36:02.020539] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:00.000 request: 00:18:00.000 { 00:18:00.000 "name": "TLSTEST", 00:18:00.000 "trtype": "tcp", 00:18:00.000 "traddr": "10.0.0.2", 00:18:00.000 "adrfam": "ipv4", 00:18:00.000 "trsvcid": "4420", 00:18:00.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.000 "prchk_reftag": false, 00:18:00.000 "prchk_guard": false, 00:18:00.000 "hdgst": false, 00:18:00.000 "ddgst": false, 00:18:00.000 "psk": "key0", 00:18:00.000 "allow_unrecognized_csi": false, 00:18:00.000 "method": "bdev_nvme_attach_controller", 00:18:00.000 "req_id": 1 00:18:00.000 } 00:18:00.000 Got JSON-RPC error response 00:18:00.000 response: 00:18:00.000 { 00:18:00.000 "code": -126, 00:18:00.000 "message": "Required key not available" 00:18:00.000 } 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3473036 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3473036 ']' 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3473036 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3473036 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:00.000 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3473036' 00:18:00.000 killing process with pid 3473036 
00:18:00.001 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3473036 00:18:00.001 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.001 00:18:00.001 Latency(us) 00:18:00.001 [2024-11-19T16:36:02.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.001 [2024-11-19T16:36:02.224Z] =================================================================================================================== 00:18:00.001 [2024-11-19T16:36:02.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.001 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3473036 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3468259 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3468259 ']' 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3468259 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468259 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468259' 00:18:00.260 killing process with pid 3468259 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3468259 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3468259 00:18:00.260 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.e8Dh19KBkV 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.519 17:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.e8Dh19KBkV 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3473204 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3473204 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3473204 ']' 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.519 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.519 [2024-11-19 17:36:02.592799] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
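The `format_interchange_psk` step above pipes a heredoc into `python -`, but the heredoc body itself never appears in the xtrace. A minimal sketch of what it most likely computes, assuming the NVMe-oF TLS PSK interchange layout (identity prefix, two-digit hex hash id, base64 of the configured PSK followed by its CRC32 in little-endian byte order, trailing colon); the function name here is illustrative, not SPDK's:

```python
import base64
import zlib

def format_interchange_psk(prefix: str, key: str, hmac_id: int) -> str:
    """Build an NVMe-oF TLS PSK interchange string from a configured PSK.

    The configured PSK here is the ASCII hex string itself; its CRC32 is
    appended in little-endian order before base64 encoding.
    """
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, hmac_id, b64)

key_long = format_interchange_psk(
    "NVMeTLSkey-1", "00112233445566778899aabbccddeeff0011223344556677", 2)
```

With the inputs from the log (`NVMeTLSkey-1`, the 48-character hex string, digest `2`), this should reproduce the `key_long` value recorded above, including the `wWXNJw==` CRC tail.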
00:18:00.519 [2024-11-19 17:36:02.592847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.519 [2024-11-19 17:36:02.669434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.519 [2024-11-19 17:36:02.710789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.519 [2024-11-19 17:36:02.710828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.519 [2024-11-19 17:36:02.710835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.519 [2024-11-19 17:36:02.710841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.519 [2024-11-19 17:36:02.710846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
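An aside on the shutdown summaries in this log: when a run is killed before any I/O completes, the Latency table prints a min of 18446744073709551616.00. That is almost certainly a min-latency accumulator initialized to UINT64_MAX, never updated by a completion, and then rendered through a double, which cannot represent 2^64 - 1 and rounds it up to exactly 2^64. It is an artifact of the aborted run, not a real latency. A quick check of the rounding:

```python
# The Latency table's 18446744073709551616.00 is UINT64_MAX pushed through a double.
UINT64_MAX = 2**64 - 1          # plausible sentinel for a never-updated min latency
as_double = float(UINT64_MAX)   # doubles carry 53 mantissa bits; 2**64 - 1 rounds up
printed = f"{as_double:.2f}"    # how a %.2f-style formatter renders that double
```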
00:18:00.519 [2024-11-19 17:36:02.711428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.e8Dh19KBkV 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.e8Dh19KBkV 00:18:00.779 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.037 [2024-11-19 17:36:03.015930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.037 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.037 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.296 [2024-11-19 17:36:03.416970] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.296 [2024-11-19 17:36:03.417173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:01.296 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:01.555 malloc0 00:18:01.555 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.813 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e8Dh19KBkV 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.e8Dh19KBkV 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3473566 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.072 17:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3473566 /var/tmp/bdevperf.sock 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3473566 ']' 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.072 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.330 [2024-11-19 17:36:04.316730] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:18:02.330 [2024-11-19 17:36:04.316782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473566 ] 00:18:02.330 [2024-11-19 17:36:04.390071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.330 [2024-11-19 17:36:04.430666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.330 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.330 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:02.330 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:02.588 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.847 [2024-11-19 17:36:04.906249] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.847 TLSTESTn1 00:18:02.847 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.105 Running I/O for 10 seconds... 
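Collapsed out of the xtrace above, the target-side setup driven by tls.sh@50-59 amounts to this RPC sequence (paths, NQNs, and the key file name exactly as they appear in the log; this is only an illustrative consolidation and is not runnable outside the CI workspace):

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o                 # TCP transport init
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                    # -k: TLS-secured listener
$rpc bdev_malloc_create 32 4096 -b malloc0           # backing namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV   # PSK file must stay mode 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
```

The bdevperf side then adds the same `key0` against its own RPC socket (`-s /var/tmp/bdevperf.sock`) and attaches with `bdev_nvme_attach_controller ... --psk key0`, which is the `TLSTESTn1` controller exercised below.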
00:18:04.971 5439.00 IOPS, 21.25 MiB/s [2024-11-19T16:36:08.129Z] 5281.50 IOPS, 20.63 MiB/s [2024-11-19T16:36:09.500Z] 5100.33 IOPS, 19.92 MiB/s [2024-11-19T16:36:10.434Z] 5064.75 IOPS, 19.78 MiB/s [2024-11-19T16:36:11.368Z] 5008.60 IOPS, 19.56 MiB/s [2024-11-19T16:36:12.303Z] 4964.67 IOPS, 19.39 MiB/s [2024-11-19T16:36:13.239Z] 4940.29 IOPS, 19.30 MiB/s [2024-11-19T16:36:14.174Z] 4945.75 IOPS, 19.32 MiB/s [2024-11-19T16:36:15.551Z] 4943.89 IOPS, 19.31 MiB/s [2024-11-19T16:36:15.551Z] 4958.60 IOPS, 19.37 MiB/s 00:18:13.328 Latency(us) 00:18:13.328 [2024-11-19T16:36:15.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.328 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.328 Verification LBA range: start 0x0 length 0x2000 00:18:13.328 TLSTESTn1 : 10.02 4962.60 19.39 0.00 0.00 25754.94 6753.06 31457.28 00:18:13.328 [2024-11-19T16:36:15.551Z] =================================================================================================================== 00:18:13.328 [2024-11-19T16:36:15.551Z] Total : 4962.60 19.39 0.00 0.00 25754.94 6753.06 31457.28 00:18:13.328 { 00:18:13.328 "results": [ 00:18:13.328 { 00:18:13.328 "job": "TLSTESTn1", 00:18:13.328 "core_mask": "0x4", 00:18:13.328 "workload": "verify", 00:18:13.328 "status": "finished", 00:18:13.328 "verify_range": { 00:18:13.328 "start": 0, 00:18:13.328 "length": 8192 00:18:13.328 }, 00:18:13.328 "queue_depth": 128, 00:18:13.328 "io_size": 4096, 00:18:13.328 "runtime": 10.017532, 00:18:13.328 "iops": 4962.599570433116, 00:18:13.328 "mibps": 19.38515457200436, 00:18:13.328 "io_failed": 0, 00:18:13.328 "io_timeout": 0, 00:18:13.328 "avg_latency_us": 25754.936870156438, 00:18:13.328 "min_latency_us": 6753.057391304348, 00:18:13.328 "max_latency_us": 31457.28 00:18:13.328 } 00:18:13.328 ], 00:18:13.328 "core_count": 1 00:18:13.328 } 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3473566 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3473566 ']' 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3473566 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3473566 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3473566' 00:18:13.328 killing process with pid 3473566 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3473566 00:18:13.328 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.328 00:18:13.328 Latency(us) 00:18:13.328 [2024-11-19T16:36:15.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.328 [2024-11-19T16:36:15.551Z] =================================================================================================================== 00:18:13.328 [2024-11-19T16:36:15.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3473566 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.e8Dh19KBkV 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e8Dh19KBkV 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e8Dh19KBkV 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e8Dh19KBkV 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.e8Dh19KBkV 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3475709 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3475709 /var/tmp/bdevperf.sock 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3475709 ']' 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.328 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.328 [2024-11-19 17:36:15.433183] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
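Back on the successful bdevperf run above: its results JSON is internally consistent, since the `mibps` field is just `iops` times the 4096-byte I/O size from the `-o 4096` flag, divided by one MiB. A check of that arithmetic, with the values copied from the JSON:

```python
# Reproduce bdevperf's "mibps" field from its "iops" field and the -o 4096 I/O size.
io_size = 4096                    # bytes per I/O, from the -o 4096 flag
iops = 4962.599570433116          # "iops" in the results JSON above
mibps = iops * io_size / 2**20    # bytes per second divided by one MiB
```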
00:18:13.328 [2024-11-19 17:36:15.433233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475709 ] 00:18:13.328 [2024-11-19 17:36:15.509018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.587 [2024-11-19 17:36:15.548091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.587 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.587 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.587 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:13.846 [2024-11-19 17:36:15.814732] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.e8Dh19KBkV': 0100666 00:18:13.846 [2024-11-19 17:36:15.814765] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:13.846 request: 00:18:13.846 { 00:18:13.846 "name": "key0", 00:18:13.846 "path": "/tmp/tmp.e8Dh19KBkV", 00:18:13.846 "method": "keyring_file_add_key", 00:18:13.846 "req_id": 1 00:18:13.846 } 00:18:13.846 Got JSON-RPC error response 00:18:13.846 response: 00:18:13.846 { 00:18:13.846 "code": -1, 00:18:13.846 "message": "Operation not permitted" 00:18:13.846 } 00:18:13.846 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.846 [2024-11-19 17:36:16.007311] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.846 [2024-11-19 17:36:16.007337] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:13.846 request: 00:18:13.846 { 00:18:13.846 "name": "TLSTEST", 00:18:13.846 "trtype": "tcp", 00:18:13.846 "traddr": "10.0.0.2", 00:18:13.846 "adrfam": "ipv4", 00:18:13.846 "trsvcid": "4420", 00:18:13.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.846 "prchk_reftag": false, 00:18:13.846 "prchk_guard": false, 00:18:13.846 "hdgst": false, 00:18:13.846 "ddgst": false, 00:18:13.846 "psk": "key0", 00:18:13.846 "allow_unrecognized_csi": false, 00:18:13.846 "method": "bdev_nvme_attach_controller", 00:18:13.846 "req_id": 1 00:18:13.846 } 00:18:13.846 Got JSON-RPC error response 00:18:13.846 response: 00:18:13.846 { 00:18:13.846 "code": -126, 00:18:13.846 "message": "Required key not available" 00:18:13.846 } 00:18:13.846 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3475709 00:18:13.846 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3475709 ']' 00:18:13.846 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3475709 00:18:13.846 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.846 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.846 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475709 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3475709' 00:18:14.105 killing process with pid 3475709 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3475709 00:18:14.105 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.105 00:18:14.105 Latency(us) 00:18:14.105 [2024-11-19T16:36:16.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.105 [2024-11-19T16:36:16.328Z] =================================================================================================================== 00:18:14.105 [2024-11-19T16:36:16.328Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3475709 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3473204 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3473204 ']' 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3473204 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3473204 00:18:14.105 
17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3473204' 00:18:14.105 killing process with pid 3473204 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3473204 00:18:14.105 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3473204 00:18:14.363 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:14.363 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3475815 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3475815 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3475815 ']' 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:14.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.364 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.364 [2024-11-19 17:36:16.495817] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:14.364 [2024-11-19 17:36:16.495865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.364 [2024-11-19 17:36:16.556269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.622 [2024-11-19 17:36:16.599066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.622 [2024-11-19 17:36:16.599095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.622 [2024-11-19 17:36:16.599102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.622 [2024-11-19 17:36:16.599110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.622 [2024-11-19 17:36:16.599116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
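The `keyring_file_add_key` failures in this run ("Invalid permissions for key file ... 0100666", JSON-RPC code -1) come from keyring's file-mode check: once the test chmods the PSK file to 0666, any group or other permission bit disqualifies it, and tls.sh@182 restores 0600 before the key is usable again. A minimal sketch of that check, assuming it only inspects the group/other mode bits (the helper name `key_file_mode_ok` is hypothetical, not SPDK's `keyring_file_check_path`):

```python
import os
import stat
import tempfile

def key_file_mode_ok(path: str) -> bool:
    """Reject key files that grant any access to group or others,
    mirroring the 0600-style requirement seen in the log."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)               # world read/write, as after 'chmod 0666'
loose_ok = key_file_mode_ok(path)   # rejected: group/other bits are set
os.chmod(path, 0o600)               # owner-only, as after 'chmod 0600'
strict_ok = key_file_mode_ok(path)  # accepted
os.remove(path)
```

This also explains why the two failures differ: the key-add RPC fails with -1 ("Operation not permitted") at the mode check, while the later `bdev_nvme_attach_controller` and `nvmf_subsystem_add_host` calls fail because `key0` was consequently never added to the keyring.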
00:18:14.622 [2024-11-19 17:36:16.599550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.e8Dh19KBkV 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.e8Dh19KBkV 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.e8Dh19KBkV 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.e8Dh19KBkV 00:18:14.622 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:14.881 [2024-11-19 17:36:16.919664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.881 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.140 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:15.140 [2024-11-19 17:36:17.300664] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.140 [2024-11-19 17:36:17.300867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.140 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.399 malloc0 00:18:15.399 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.658 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:15.658 [2024-11-19 17:36:17.850197] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.e8Dh19KBkV': 0100666 00:18:15.658 [2024-11-19 17:36:17.850227] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:15.658 request: 00:18:15.658 { 00:18:15.658 "name": "key0", 00:18:15.658 "path": "/tmp/tmp.e8Dh19KBkV", 00:18:15.658 "method": "keyring_file_add_key", 00:18:15.658 "req_id": 1 
00:18:15.658 } 00:18:15.658 Got JSON-RPC error response 00:18:15.658 response: 00:18:15.658 { 00:18:15.658 "code": -1, 00:18:15.658 "message": "Operation not permitted" 00:18:15.658 } 00:18:15.658 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.918 [2024-11-19 17:36:18.026685] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:15.918 [2024-11-19 17:36:18.026720] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:15.918 request: 00:18:15.918 { 00:18:15.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.918 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.918 "psk": "key0", 00:18:15.918 "method": "nvmf_subsystem_add_host", 00:18:15.918 "req_id": 1 00:18:15.918 } 00:18:15.918 Got JSON-RPC error response 00:18:15.918 response: 00:18:15.918 { 00:18:15.918 "code": -32603, 00:18:15.918 "message": "Internal error" 00:18:15.918 } 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3475815 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3475815 ']' 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3475815 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.918 17:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475815 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3475815' 00:18:15.918 killing process with pid 3475815 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3475815 00:18:15.918 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3475815 00:18:16.177 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.e8Dh19KBkV 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3476251 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3476251 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3476251 ']' 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.178 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.178 [2024-11-19 17:36:18.311666] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:16.178 [2024-11-19 17:36:18.311712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.178 [2024-11-19 17:36:18.391941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.437 [2024-11-19 17:36:18.432833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.437 [2024-11-19 17:36:18.432868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.437 [2024-11-19 17:36:18.432875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.437 [2024-11-19 17:36:18.432881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.437 [2024-11-19 17:36:18.432886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.437 [2024-11-19 17:36:18.433485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.e8Dh19KBkV 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.e8Dh19KBkV 00:18:16.437 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.696 [2024-11-19 17:36:18.741301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.696 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:16.955 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.955 [2024-11-19 17:36:19.138325] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.955 [2024-11-19 17:36:19.138534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:16.955 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.213 malloc0 00:18:17.213 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.472 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:17.730 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3476557 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3476557 /var/tmp/bdevperf.sock 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3476557 ']' 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:17.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.989 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.989 [2024-11-19 17:36:20.000966] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:17.989 [2024-11-19 17:36:20.001016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476557 ] 00:18:17.989 [2024-11-19 17:36:20.075730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.989 [2024-11-19 17:36:20.117636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.248 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.248 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.248 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:18.248 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.507 [2024-11-19 17:36:20.584588] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.507 TLSTESTn1 00:18:18.507 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:18.767 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:18.767 "subsystems": [ 00:18:18.767 { 00:18:18.767 "subsystem": "keyring", 00:18:18.767 "config": [ 00:18:18.767 { 00:18:18.767 "method": "keyring_file_add_key", 00:18:18.767 "params": { 00:18:18.767 "name": "key0", 00:18:18.767 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:18.767 } 00:18:18.767 } 00:18:18.767 ] 00:18:18.767 }, 00:18:18.767 { 00:18:18.767 "subsystem": "iobuf", 00:18:18.767 "config": [ 00:18:18.767 { 00:18:18.767 "method": "iobuf_set_options", 00:18:18.767 "params": { 00:18:18.767 "small_pool_count": 8192, 00:18:18.767 "large_pool_count": 1024, 00:18:18.767 "small_bufsize": 8192, 00:18:18.767 "large_bufsize": 135168, 00:18:18.767 "enable_numa": false 00:18:18.767 } 00:18:18.767 } 00:18:18.767 ] 00:18:18.767 }, 00:18:18.767 { 00:18:18.767 "subsystem": "sock", 00:18:18.767 "config": [ 00:18:18.767 { 00:18:18.767 "method": "sock_set_default_impl", 00:18:18.767 "params": { 00:18:18.767 "impl_name": "posix" 00:18:18.767 } 00:18:18.767 }, 00:18:18.767 { 00:18:18.767 "method": "sock_impl_set_options", 00:18:18.767 "params": { 00:18:18.767 "impl_name": "ssl", 00:18:18.767 "recv_buf_size": 4096, 00:18:18.767 "send_buf_size": 4096, 00:18:18.767 "enable_recv_pipe": true, 00:18:18.767 "enable_quickack": false, 00:18:18.767 "enable_placement_id": 0, 00:18:18.767 "enable_zerocopy_send_server": true, 00:18:18.767 "enable_zerocopy_send_client": false, 00:18:18.767 "zerocopy_threshold": 0, 00:18:18.767 "tls_version": 0, 00:18:18.767 "enable_ktls": false 00:18:18.767 } 00:18:18.767 }, 00:18:18.767 { 00:18:18.767 "method": "sock_impl_set_options", 00:18:18.767 "params": { 00:18:18.767 "impl_name": "posix", 00:18:18.767 "recv_buf_size": 2097152, 00:18:18.767 "send_buf_size": 2097152, 00:18:18.767 "enable_recv_pipe": true, 00:18:18.767 "enable_quickack": false, 00:18:18.767 "enable_placement_id": 0, 
00:18:18.767 "enable_zerocopy_send_server": true, 00:18:18.767 "enable_zerocopy_send_client": false, 00:18:18.767 "zerocopy_threshold": 0, 00:18:18.767 "tls_version": 0, 00:18:18.767 "enable_ktls": false 00:18:18.767 } 00:18:18.767 } 00:18:18.767 ] 00:18:18.767 }, 00:18:18.767 { 00:18:18.767 "subsystem": "vmd", 00:18:18.767 "config": [] 00:18:18.767 }, 00:18:18.767 { 00:18:18.767 "subsystem": "accel", 00:18:18.767 "config": [ 00:18:18.767 { 00:18:18.767 "method": "accel_set_options", 00:18:18.767 "params": { 00:18:18.767 "small_cache_size": 128, 00:18:18.767 "large_cache_size": 16, 00:18:18.767 "task_count": 2048, 00:18:18.767 "sequence_count": 2048, 00:18:18.767 "buf_count": 2048 00:18:18.767 } 00:18:18.767 } 00:18:18.767 ] 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "subsystem": "bdev", 00:18:18.768 "config": [ 00:18:18.768 { 00:18:18.768 "method": "bdev_set_options", 00:18:18.768 "params": { 00:18:18.768 "bdev_io_pool_size": 65535, 00:18:18.768 "bdev_io_cache_size": 256, 00:18:18.768 "bdev_auto_examine": true, 00:18:18.768 "iobuf_small_cache_size": 128, 00:18:18.768 "iobuf_large_cache_size": 16 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "bdev_raid_set_options", 00:18:18.768 "params": { 00:18:18.768 "process_window_size_kb": 1024, 00:18:18.768 "process_max_bandwidth_mb_sec": 0 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "bdev_iscsi_set_options", 00:18:18.768 "params": { 00:18:18.768 "timeout_sec": 30 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "bdev_nvme_set_options", 00:18:18.768 "params": { 00:18:18.768 "action_on_timeout": "none", 00:18:18.768 "timeout_us": 0, 00:18:18.768 "timeout_admin_us": 0, 00:18:18.768 "keep_alive_timeout_ms": 10000, 00:18:18.768 "arbitration_burst": 0, 00:18:18.768 "low_priority_weight": 0, 00:18:18.768 "medium_priority_weight": 0, 00:18:18.768 "high_priority_weight": 0, 00:18:18.768 "nvme_adminq_poll_period_us": 10000, 00:18:18.768 "nvme_ioq_poll_period_us": 0, 
00:18:18.768 "io_queue_requests": 0, 00:18:18.768 "delay_cmd_submit": true, 00:18:18.768 "transport_retry_count": 4, 00:18:18.768 "bdev_retry_count": 3, 00:18:18.768 "transport_ack_timeout": 0, 00:18:18.768 "ctrlr_loss_timeout_sec": 0, 00:18:18.768 "reconnect_delay_sec": 0, 00:18:18.768 "fast_io_fail_timeout_sec": 0, 00:18:18.768 "disable_auto_failback": false, 00:18:18.768 "generate_uuids": false, 00:18:18.768 "transport_tos": 0, 00:18:18.768 "nvme_error_stat": false, 00:18:18.768 "rdma_srq_size": 0, 00:18:18.768 "io_path_stat": false, 00:18:18.768 "allow_accel_sequence": false, 00:18:18.768 "rdma_max_cq_size": 0, 00:18:18.768 "rdma_cm_event_timeout_ms": 0, 00:18:18.768 "dhchap_digests": [ 00:18:18.768 "sha256", 00:18:18.768 "sha384", 00:18:18.768 "sha512" 00:18:18.768 ], 00:18:18.768 "dhchap_dhgroups": [ 00:18:18.768 "null", 00:18:18.768 "ffdhe2048", 00:18:18.768 "ffdhe3072", 00:18:18.768 "ffdhe4096", 00:18:18.768 "ffdhe6144", 00:18:18.768 "ffdhe8192" 00:18:18.768 ] 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "bdev_nvme_set_hotplug", 00:18:18.768 "params": { 00:18:18.768 "period_us": 100000, 00:18:18.768 "enable": false 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "bdev_malloc_create", 00:18:18.768 "params": { 00:18:18.768 "name": "malloc0", 00:18:18.768 "num_blocks": 8192, 00:18:18.768 "block_size": 4096, 00:18:18.768 "physical_block_size": 4096, 00:18:18.768 "uuid": "0141cbbb-fbd9-40d1-bdbb-e5ae9ac1f59e", 00:18:18.768 "optimal_io_boundary": 0, 00:18:18.768 "md_size": 0, 00:18:18.768 "dif_type": 0, 00:18:18.768 "dif_is_head_of_md": false, 00:18:18.768 "dif_pi_format": 0 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "bdev_wait_for_examine" 00:18:18.768 } 00:18:18.768 ] 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "subsystem": "nbd", 00:18:18.768 "config": [] 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "subsystem": "scheduler", 00:18:18.768 "config": [ 00:18:18.768 { 00:18:18.768 "method": 
"framework_set_scheduler", 00:18:18.768 "params": { 00:18:18.768 "name": "static" 00:18:18.768 } 00:18:18.768 } 00:18:18.768 ] 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "subsystem": "nvmf", 00:18:18.768 "config": [ 00:18:18.768 { 00:18:18.768 "method": "nvmf_set_config", 00:18:18.768 "params": { 00:18:18.768 "discovery_filter": "match_any", 00:18:18.768 "admin_cmd_passthru": { 00:18:18.768 "identify_ctrlr": false 00:18:18.768 }, 00:18:18.768 "dhchap_digests": [ 00:18:18.768 "sha256", 00:18:18.768 "sha384", 00:18:18.768 "sha512" 00:18:18.768 ], 00:18:18.768 "dhchap_dhgroups": [ 00:18:18.768 "null", 00:18:18.768 "ffdhe2048", 00:18:18.768 "ffdhe3072", 00:18:18.768 "ffdhe4096", 00:18:18.768 "ffdhe6144", 00:18:18.768 "ffdhe8192" 00:18:18.768 ] 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "nvmf_set_max_subsystems", 00:18:18.768 "params": { 00:18:18.768 "max_subsystems": 1024 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "nvmf_set_crdt", 00:18:18.768 "params": { 00:18:18.768 "crdt1": 0, 00:18:18.768 "crdt2": 0, 00:18:18.768 "crdt3": 0 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "nvmf_create_transport", 00:18:18.768 "params": { 00:18:18.768 "trtype": "TCP", 00:18:18.768 "max_queue_depth": 128, 00:18:18.768 "max_io_qpairs_per_ctrlr": 127, 00:18:18.768 "in_capsule_data_size": 4096, 00:18:18.768 "max_io_size": 131072, 00:18:18.768 "io_unit_size": 131072, 00:18:18.768 "max_aq_depth": 128, 00:18:18.768 "num_shared_buffers": 511, 00:18:18.768 "buf_cache_size": 4294967295, 00:18:18.768 "dif_insert_or_strip": false, 00:18:18.768 "zcopy": false, 00:18:18.768 "c2h_success": false, 00:18:18.768 "sock_priority": 0, 00:18:18.768 "abort_timeout_sec": 1, 00:18:18.768 "ack_timeout": 0, 00:18:18.768 "data_wr_pool_size": 0 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "nvmf_create_subsystem", 00:18:18.768 "params": { 00:18:18.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.768 
"allow_any_host": false, 00:18:18.768 "serial_number": "SPDK00000000000001", 00:18:18.768 "model_number": "SPDK bdev Controller", 00:18:18.768 "max_namespaces": 10, 00:18:18.768 "min_cntlid": 1, 00:18:18.768 "max_cntlid": 65519, 00:18:18.768 "ana_reporting": false 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "nvmf_subsystem_add_host", 00:18:18.768 "params": { 00:18:18.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.768 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.768 "psk": "key0" 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "nvmf_subsystem_add_ns", 00:18:18.768 "params": { 00:18:18.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.768 "namespace": { 00:18:18.768 "nsid": 1, 00:18:18.768 "bdev_name": "malloc0", 00:18:18.768 "nguid": "0141CBBBFBD940D1BDBBE5AE9AC1F59E", 00:18:18.768 "uuid": "0141cbbb-fbd9-40d1-bdbb-e5ae9ac1f59e", 00:18:18.768 "no_auto_visible": false 00:18:18.768 } 00:18:18.768 } 00:18:18.768 }, 00:18:18.768 { 00:18:18.768 "method": "nvmf_subsystem_add_listener", 00:18:18.768 "params": { 00:18:18.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.768 "listen_address": { 00:18:18.768 "trtype": "TCP", 00:18:18.768 "adrfam": "IPv4", 00:18:18.768 "traddr": "10.0.0.2", 00:18:18.768 "trsvcid": "4420" 00:18:18.768 }, 00:18:18.768 "secure_channel": true 00:18:18.768 } 00:18:18.768 } 00:18:18.768 ] 00:18:18.768 } 00:18:18.768 ] 00:18:18.768 }' 00:18:18.768 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:19.028 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:19.028 "subsystems": [ 00:18:19.028 { 00:18:19.028 "subsystem": "keyring", 00:18:19.028 "config": [ 00:18:19.028 { 00:18:19.028 "method": "keyring_file_add_key", 00:18:19.028 "params": { 00:18:19.028 "name": "key0", 00:18:19.028 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:19.028 } 
00:18:19.028 } 00:18:19.028 ] 00:18:19.028 }, 00:18:19.028 { 00:18:19.028 "subsystem": "iobuf", 00:18:19.028 "config": [ 00:18:19.028 { 00:18:19.028 "method": "iobuf_set_options", 00:18:19.028 "params": { 00:18:19.028 "small_pool_count": 8192, 00:18:19.028 "large_pool_count": 1024, 00:18:19.028 "small_bufsize": 8192, 00:18:19.028 "large_bufsize": 135168, 00:18:19.028 "enable_numa": false 00:18:19.028 } 00:18:19.028 } 00:18:19.028 ] 00:18:19.028 }, 00:18:19.028 { 00:18:19.028 "subsystem": "sock", 00:18:19.028 "config": [ 00:18:19.028 { 00:18:19.028 "method": "sock_set_default_impl", 00:18:19.028 "params": { 00:18:19.028 "impl_name": "posix" 00:18:19.028 } 00:18:19.028 }, 00:18:19.028 { 00:18:19.028 "method": "sock_impl_set_options", 00:18:19.028 "params": { 00:18:19.028 "impl_name": "ssl", 00:18:19.028 "recv_buf_size": 4096, 00:18:19.028 "send_buf_size": 4096, 00:18:19.028 "enable_recv_pipe": true, 00:18:19.028 "enable_quickack": false, 00:18:19.028 "enable_placement_id": 0, 00:18:19.028 "enable_zerocopy_send_server": true, 00:18:19.028 "enable_zerocopy_send_client": false, 00:18:19.028 "zerocopy_threshold": 0, 00:18:19.028 "tls_version": 0, 00:18:19.028 "enable_ktls": false 00:18:19.028 } 00:18:19.028 }, 00:18:19.028 { 00:18:19.028 "method": "sock_impl_set_options", 00:18:19.028 "params": { 00:18:19.028 "impl_name": "posix", 00:18:19.028 "recv_buf_size": 2097152, 00:18:19.028 "send_buf_size": 2097152, 00:18:19.028 "enable_recv_pipe": true, 00:18:19.028 "enable_quickack": false, 00:18:19.028 "enable_placement_id": 0, 00:18:19.028 "enable_zerocopy_send_server": true, 00:18:19.028 "enable_zerocopy_send_client": false, 00:18:19.028 "zerocopy_threshold": 0, 00:18:19.028 "tls_version": 0, 00:18:19.028 "enable_ktls": false 00:18:19.028 } 00:18:19.028 } 00:18:19.028 ] 00:18:19.028 }, 00:18:19.028 { 00:18:19.028 "subsystem": "vmd", 00:18:19.028 "config": [] 00:18:19.028 }, 00:18:19.028 { 00:18:19.028 "subsystem": "accel", 00:18:19.028 "config": [ 00:18:19.028 { 00:18:19.028 
"method": "accel_set_options", 00:18:19.028 "params": { 00:18:19.028 "small_cache_size": 128, 00:18:19.028 "large_cache_size": 16, 00:18:19.028 "task_count": 2048, 00:18:19.028 "sequence_count": 2048, 00:18:19.028 "buf_count": 2048 00:18:19.028 } 00:18:19.028 } 00:18:19.028 ] 00:18:19.028 }, 00:18:19.028 { 00:18:19.028 "subsystem": "bdev", 00:18:19.028 "config": [ 00:18:19.029 { 00:18:19.029 "method": "bdev_set_options", 00:18:19.029 "params": { 00:18:19.029 "bdev_io_pool_size": 65535, 00:18:19.029 "bdev_io_cache_size": 256, 00:18:19.029 "bdev_auto_examine": true, 00:18:19.029 "iobuf_small_cache_size": 128, 00:18:19.029 "iobuf_large_cache_size": 16 00:18:19.029 } 00:18:19.029 }, 00:18:19.029 { 00:18:19.029 "method": "bdev_raid_set_options", 00:18:19.029 "params": { 00:18:19.029 "process_window_size_kb": 1024, 00:18:19.029 "process_max_bandwidth_mb_sec": 0 00:18:19.029 } 00:18:19.029 }, 00:18:19.029 { 00:18:19.029 "method": "bdev_iscsi_set_options", 00:18:19.029 "params": { 00:18:19.029 "timeout_sec": 30 00:18:19.029 } 00:18:19.029 }, 00:18:19.029 { 00:18:19.029 "method": "bdev_nvme_set_options", 00:18:19.029 "params": { 00:18:19.029 "action_on_timeout": "none", 00:18:19.029 "timeout_us": 0, 00:18:19.029 "timeout_admin_us": 0, 00:18:19.029 "keep_alive_timeout_ms": 10000, 00:18:19.029 "arbitration_burst": 0, 00:18:19.029 "low_priority_weight": 0, 00:18:19.029 "medium_priority_weight": 0, 00:18:19.029 "high_priority_weight": 0, 00:18:19.029 "nvme_adminq_poll_period_us": 10000, 00:18:19.029 "nvme_ioq_poll_period_us": 0, 00:18:19.029 "io_queue_requests": 512, 00:18:19.029 "delay_cmd_submit": true, 00:18:19.029 "transport_retry_count": 4, 00:18:19.029 "bdev_retry_count": 3, 00:18:19.029 "transport_ack_timeout": 0, 00:18:19.029 "ctrlr_loss_timeout_sec": 0, 00:18:19.029 "reconnect_delay_sec": 0, 00:18:19.029 "fast_io_fail_timeout_sec": 0, 00:18:19.029 "disable_auto_failback": false, 00:18:19.029 "generate_uuids": false, 00:18:19.029 "transport_tos": 0, 00:18:19.029 
"nvme_error_stat": false, 00:18:19.029 "rdma_srq_size": 0, 00:18:19.029 "io_path_stat": false, 00:18:19.029 "allow_accel_sequence": false, 00:18:19.029 "rdma_max_cq_size": 0, 00:18:19.029 "rdma_cm_event_timeout_ms": 0, 00:18:19.029 "dhchap_digests": [ 00:18:19.029 "sha256", 00:18:19.029 "sha384", 00:18:19.029 "sha512" 00:18:19.029 ], 00:18:19.029 "dhchap_dhgroups": [ 00:18:19.029 "null", 00:18:19.029 "ffdhe2048", 00:18:19.029 "ffdhe3072", 00:18:19.029 "ffdhe4096", 00:18:19.029 "ffdhe6144", 00:18:19.029 "ffdhe8192" 00:18:19.029 ] 00:18:19.029 } 00:18:19.029 }, 00:18:19.029 { 00:18:19.029 "method": "bdev_nvme_attach_controller", 00:18:19.029 "params": { 00:18:19.029 "name": "TLSTEST", 00:18:19.029 "trtype": "TCP", 00:18:19.029 "adrfam": "IPv4", 00:18:19.029 "traddr": "10.0.0.2", 00:18:19.029 "trsvcid": "4420", 00:18:19.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.029 "prchk_reftag": false, 00:18:19.029 "prchk_guard": false, 00:18:19.029 "ctrlr_loss_timeout_sec": 0, 00:18:19.029 "reconnect_delay_sec": 0, 00:18:19.029 "fast_io_fail_timeout_sec": 0, 00:18:19.029 "psk": "key0", 00:18:19.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.029 "hdgst": false, 00:18:19.029 "ddgst": false, 00:18:19.029 "multipath": "multipath" 00:18:19.029 } 00:18:19.029 }, 00:18:19.029 { 00:18:19.029 "method": "bdev_nvme_set_hotplug", 00:18:19.029 "params": { 00:18:19.029 "period_us": 100000, 00:18:19.029 "enable": false 00:18:19.029 } 00:18:19.029 }, 00:18:19.029 { 00:18:19.029 "method": "bdev_wait_for_examine" 00:18:19.029 } 00:18:19.029 ] 00:18:19.029 }, 00:18:19.029 { 00:18:19.029 "subsystem": "nbd", 00:18:19.029 "config": [] 00:18:19.029 } 00:18:19.029 ] 00:18:19.029 }' 00:18:19.029 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3476557 00:18:19.029 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3476557 ']' 00:18:19.029 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3476557 00:18:19.029 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.029 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.029 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476557 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476557' 00:18:19.289 killing process with pid 3476557 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3476557 00:18:19.289 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.289 00:18:19.289 Latency(us) 00:18:19.289 [2024-11-19T16:36:21.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.289 [2024-11-19T16:36:21.512Z] =================================================================================================================== 00:18:19.289 [2024-11-19T16:36:21.512Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3476557 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3476251 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3476251 ']' 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3476251 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476251 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476251' 00:18:19.289 killing process with pid 3476251 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3476251 00:18:19.289 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3476251 00:18:19.549 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:19.549 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.549 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.549 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:19.549 "subsystems": [ 00:18:19.549 { 00:18:19.549 "subsystem": "keyring", 00:18:19.549 "config": [ 00:18:19.549 { 00:18:19.549 "method": "keyring_file_add_key", 00:18:19.549 "params": { 00:18:19.549 "name": "key0", 00:18:19.549 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:19.549 } 00:18:19.549 } 00:18:19.549 ] 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "subsystem": "iobuf", 00:18:19.549 "config": [ 00:18:19.549 { 00:18:19.549 "method": "iobuf_set_options", 00:18:19.549 "params": { 00:18:19.549 "small_pool_count": 8192, 00:18:19.549 "large_pool_count": 1024, 00:18:19.549 "small_bufsize": 8192, 00:18:19.549 "large_bufsize": 135168, 00:18:19.549 "enable_numa": false 00:18:19.549 } 00:18:19.549 } 00:18:19.549 ] 00:18:19.549 }, 
00:18:19.549 { 00:18:19.549 "subsystem": "sock", 00:18:19.549 "config": [ 00:18:19.549 { 00:18:19.549 "method": "sock_set_default_impl", 00:18:19.549 "params": { 00:18:19.549 "impl_name": "posix" 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "sock_impl_set_options", 00:18:19.549 "params": { 00:18:19.549 "impl_name": "ssl", 00:18:19.549 "recv_buf_size": 4096, 00:18:19.549 "send_buf_size": 4096, 00:18:19.549 "enable_recv_pipe": true, 00:18:19.549 "enable_quickack": false, 00:18:19.549 "enable_placement_id": 0, 00:18:19.549 "enable_zerocopy_send_server": true, 00:18:19.549 "enable_zerocopy_send_client": false, 00:18:19.549 "zerocopy_threshold": 0, 00:18:19.549 "tls_version": 0, 00:18:19.549 "enable_ktls": false 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "sock_impl_set_options", 00:18:19.549 "params": { 00:18:19.549 "impl_name": "posix", 00:18:19.549 "recv_buf_size": 2097152, 00:18:19.549 "send_buf_size": 2097152, 00:18:19.549 "enable_recv_pipe": true, 00:18:19.549 "enable_quickack": false, 00:18:19.549 "enable_placement_id": 0, 00:18:19.549 "enable_zerocopy_send_server": true, 00:18:19.549 "enable_zerocopy_send_client": false, 00:18:19.549 "zerocopy_threshold": 0, 00:18:19.549 "tls_version": 0, 00:18:19.549 "enable_ktls": false 00:18:19.549 } 00:18:19.549 } 00:18:19.549 ] 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "subsystem": "vmd", 00:18:19.549 "config": [] 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "subsystem": "accel", 00:18:19.549 "config": [ 00:18:19.549 { 00:18:19.549 "method": "accel_set_options", 00:18:19.549 "params": { 00:18:19.549 "small_cache_size": 128, 00:18:19.549 "large_cache_size": 16, 00:18:19.549 "task_count": 2048, 00:18:19.549 "sequence_count": 2048, 00:18:19.549 "buf_count": 2048 00:18:19.549 } 00:18:19.549 } 00:18:19.549 ] 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "subsystem": "bdev", 00:18:19.549 "config": [ 00:18:19.549 { 00:18:19.549 "method": "bdev_set_options", 00:18:19.549 "params": { 
00:18:19.549 "bdev_io_pool_size": 65535, 00:18:19.549 "bdev_io_cache_size": 256, 00:18:19.549 "bdev_auto_examine": true, 00:18:19.549 "iobuf_small_cache_size": 128, 00:18:19.549 "iobuf_large_cache_size": 16 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "bdev_raid_set_options", 00:18:19.549 "params": { 00:18:19.549 "process_window_size_kb": 1024, 00:18:19.549 "process_max_bandwidth_mb_sec": 0 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "bdev_iscsi_set_options", 00:18:19.549 "params": { 00:18:19.549 "timeout_sec": 30 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "bdev_nvme_set_options", 00:18:19.549 "params": { 00:18:19.549 "action_on_timeout": "none", 00:18:19.549 "timeout_us": 0, 00:18:19.549 "timeout_admin_us": 0, 00:18:19.549 "keep_alive_timeout_ms": 10000, 00:18:19.549 "arbitration_burst": 0, 00:18:19.549 "low_priority_weight": 0, 00:18:19.549 "medium_priority_weight": 0, 00:18:19.549 "high_priority_weight": 0, 00:18:19.549 "nvme_adminq_poll_period_us": 10000, 00:18:19.549 "nvme_ioq_poll_period_us": 0, 00:18:19.549 "io_queue_requests": 0, 00:18:19.549 "delay_cmd_submit": true, 00:18:19.549 "transport_retry_count": 4, 00:18:19.549 "bdev_retry_count": 3, 00:18:19.549 "transport_ack_timeout": 0, 00:18:19.549 "ctrlr_loss_timeout_sec": 0, 00:18:19.549 "reconnect_delay_sec": 0, 00:18:19.549 "fast_io_fail_timeout_sec": 0, 00:18:19.549 "disable_auto_failback": false, 00:18:19.549 "generate_uuids": false, 00:18:19.549 "transport_tos": 0, 00:18:19.549 "nvme_error_stat": false, 00:18:19.549 "rdma_srq_size": 0, 00:18:19.549 "io_path_stat": false, 00:18:19.549 "allow_accel_sequence": false, 00:18:19.549 "rdma_max_cq_size": 0, 00:18:19.549 "rdma_cm_event_timeout_ms": 0, 00:18:19.549 "dhchap_digests": [ 00:18:19.549 "sha256", 00:18:19.549 "sha384", 00:18:19.549 "sha512" 00:18:19.549 ], 00:18:19.549 "dhchap_dhgroups": [ 00:18:19.549 "null", 00:18:19.549 "ffdhe2048", 00:18:19.549 "ffdhe3072", 00:18:19.549 
"ffdhe4096", 00:18:19.549 "ffdhe6144", 00:18:19.549 "ffdhe8192" 00:18:19.549 ] 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "bdev_nvme_set_hotplug", 00:18:19.549 "params": { 00:18:19.549 "period_us": 100000, 00:18:19.549 "enable": false 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "bdev_malloc_create", 00:18:19.549 "params": { 00:18:19.549 "name": "malloc0", 00:18:19.549 "num_blocks": 8192, 00:18:19.549 "block_size": 4096, 00:18:19.549 "physical_block_size": 4096, 00:18:19.549 "uuid": "0141cbbb-fbd9-40d1-bdbb-e5ae9ac1f59e", 00:18:19.549 "optimal_io_boundary": 0, 00:18:19.549 "md_size": 0, 00:18:19.549 "dif_type": 0, 00:18:19.549 "dif_is_head_of_md": false, 00:18:19.549 "dif_pi_format": 0 00:18:19.549 } 00:18:19.549 }, 00:18:19.549 { 00:18:19.549 "method": "bdev_wait_for_examine" 00:18:19.550 } 00:18:19.550 ] 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "subsystem": "nbd", 00:18:19.550 "config": [] 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "subsystem": "scheduler", 00:18:19.550 "config": [ 00:18:19.550 { 00:18:19.550 "method": "framework_set_scheduler", 00:18:19.550 "params": { 00:18:19.550 "name": "static" 00:18:19.550 } 00:18:19.550 } 00:18:19.550 ] 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "subsystem": "nvmf", 00:18:19.550 "config": [ 00:18:19.550 { 00:18:19.550 "method": "nvmf_set_config", 00:18:19.550 "params": { 00:18:19.550 "discovery_filter": "match_any", 00:18:19.550 "admin_cmd_passthru": { 00:18:19.550 "identify_ctrlr": false 00:18:19.550 }, 00:18:19.550 "dhchap_digests": [ 00:18:19.550 "sha256", 00:18:19.550 "sha384", 00:18:19.550 "sha512" 00:18:19.550 ], 00:18:19.550 "dhchap_dhgroups": [ 00:18:19.550 "null", 00:18:19.550 "ffdhe2048", 00:18:19.550 "ffdhe3072", 00:18:19.550 "ffdhe4096", 00:18:19.550 "ffdhe6144", 00:18:19.550 "ffdhe8192" 00:18:19.550 ] 00:18:19.550 } 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "method": "nvmf_set_max_subsystems", 00:18:19.550 "params": { 00:18:19.550 "max_subsystems": 1024 
00:18:19.550 } 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "method": "nvmf_set_crdt", 00:18:19.550 "params": { 00:18:19.550 "crdt1": 0, 00:18:19.550 "crdt2": 0, 00:18:19.550 "crdt3": 0 00:18:19.550 } 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "method": "nvmf_create_transport", 00:18:19.550 "params": { 00:18:19.550 "trtype": "TCP", 00:18:19.550 "max_queue_depth": 128, 00:18:19.550 "max_io_qpairs_per_ctrlr": 127, 00:18:19.550 "in_capsule_data_size": 4096, 00:18:19.550 "max_io_size": 131072, 00:18:19.550 "io_unit_size": 131072, 00:18:19.550 "max_aq_depth": 128, 00:18:19.550 "num_shared_buffers": 511, 00:18:19.550 "buf_cache_size": 4294967295, 00:18:19.550 "dif_insert_or_strip": false, 00:18:19.550 "zcopy": false, 00:18:19.550 "c2h_success": false, 00:18:19.550 "sock_priority": 0, 00:18:19.550 "abort_timeout_sec": 1, 00:18:19.550 "ack_timeout": 0, 00:18:19.550 "data_wr_pool_size": 0 00:18:19.550 } 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "method": "nvmf_create_subsystem", 00:18:19.550 "params": { 00:18:19.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.550 "allow_any_host": false, 00:18:19.550 "serial_number": "SPDK00000000000001", 00:18:19.550 "model_number": "SPDK bdev Controller", 00:18:19.550 "max_namespaces": 10, 00:18:19.550 "min_cntlid": 1, 00:18:19.550 "max_cntlid": 65519, 00:18:19.550 "ana_reporting": false 00:18:19.550 } 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "method": "nvmf_subsystem_add_host", 00:18:19.550 "params": { 00:18:19.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.550 "host": "nqn.2016-06.io.spdk:host1", 00:18:19.550 "psk": "key0" 00:18:19.550 } 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "method": "nvmf_subsystem_add_ns", 00:18:19.550 "params": { 00:18:19.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.550 "namespace": { 00:18:19.550 "nsid": 1, 00:18:19.550 "bdev_name": "malloc0", 00:18:19.550 "nguid": "0141CBBBFBD940D1BDBBE5AE9AC1F59E", 00:18:19.550 "uuid": "0141cbbb-fbd9-40d1-bdbb-e5ae9ac1f59e", 00:18:19.550 "no_auto_visible": 
false 00:18:19.550 } 00:18:19.550 } 00:18:19.550 }, 00:18:19.550 { 00:18:19.550 "method": "nvmf_subsystem_add_listener", 00:18:19.550 "params": { 00:18:19.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.550 "listen_address": { 00:18:19.550 "trtype": "TCP", 00:18:19.550 "adrfam": "IPv4", 00:18:19.550 "traddr": "10.0.0.2", 00:18:19.550 "trsvcid": "4420" 00:18:19.550 }, 00:18:19.550 "secure_channel": true 00:18:19.550 } 00:18:19.550 } 00:18:19.550 ] 00:18:19.550 } 00:18:19.550 ] 00:18:19.550 }' 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3476804 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3476804 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3476804 ']' 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.550 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.550 [2024-11-19 17:36:21.693418] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:19.550 [2024-11-19 17:36:21.693466] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.810 [2024-11-19 17:36:21.773608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.810 [2024-11-19 17:36:21.813738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.810 [2024-11-19 17:36:21.813773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.810 [2024-11-19 17:36:21.813781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.810 [2024-11-19 17:36:21.813787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.810 [2024-11-19 17:36:21.813792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.810 [2024-11-19 17:36:21.814366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.810 [2024-11-19 17:36:22.025968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.068 [2024-11-19 17:36:22.057990] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.069 [2024-11-19 17:36:22.058186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.328 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.328 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.328 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.328 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.328 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3477034 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3477034 /var/tmp/bdevperf.sock 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3477034 ']' 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.589 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:20.589 "subsystems": [ 00:18:20.589 { 00:18:20.589 "subsystem": "keyring", 00:18:20.589 "config": [ 00:18:20.589 { 00:18:20.589 "method": "keyring_file_add_key", 00:18:20.589 "params": { 00:18:20.589 "name": "key0", 00:18:20.589 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:20.589 } 00:18:20.589 } 00:18:20.589 ] 00:18:20.589 }, 00:18:20.589 { 00:18:20.589 "subsystem": "iobuf", 00:18:20.589 "config": [ 00:18:20.589 { 00:18:20.589 "method": "iobuf_set_options", 00:18:20.589 "params": { 00:18:20.589 "small_pool_count": 8192, 00:18:20.589 "large_pool_count": 1024, 00:18:20.589 "small_bufsize": 8192, 00:18:20.589 "large_bufsize": 135168, 00:18:20.589 "enable_numa": false 00:18:20.589 } 00:18:20.589 } 00:18:20.589 ] 00:18:20.589 }, 00:18:20.589 { 00:18:20.589 "subsystem": "sock", 00:18:20.589 "config": [ 00:18:20.589 { 00:18:20.589 "method": "sock_set_default_impl", 00:18:20.589 "params": { 00:18:20.589 "impl_name": "posix" 00:18:20.589 } 00:18:20.589 }, 00:18:20.589 { 00:18:20.589 "method": "sock_impl_set_options", 00:18:20.589 "params": { 00:18:20.589 "impl_name": "ssl", 00:18:20.589 "recv_buf_size": 4096, 00:18:20.589 "send_buf_size": 4096, 00:18:20.589 "enable_recv_pipe": true, 00:18:20.589 "enable_quickack": false, 00:18:20.589 "enable_placement_id": 0, 00:18:20.589 "enable_zerocopy_send_server": true, 00:18:20.589 "enable_zerocopy_send_client": false, 00:18:20.589 "zerocopy_threshold": 0, 00:18:20.589 "tls_version": 0, 00:18:20.589 "enable_ktls": false 00:18:20.589 } 00:18:20.589 }, 00:18:20.589 { 00:18:20.589 "method": "sock_impl_set_options", 00:18:20.589 "params": { 
00:18:20.589 "impl_name": "posix", 00:18:20.589 "recv_buf_size": 2097152, 00:18:20.589 "send_buf_size": 2097152, 00:18:20.589 "enable_recv_pipe": true, 00:18:20.589 "enable_quickack": false, 00:18:20.589 "enable_placement_id": 0, 00:18:20.589 "enable_zerocopy_send_server": true, 00:18:20.589 "enable_zerocopy_send_client": false, 00:18:20.589 "zerocopy_threshold": 0, 00:18:20.589 "tls_version": 0, 00:18:20.589 "enable_ktls": false 00:18:20.589 } 00:18:20.589 } 00:18:20.589 ] 00:18:20.589 }, 00:18:20.589 { 00:18:20.589 "subsystem": "vmd", 00:18:20.589 "config": [] 00:18:20.589 }, 00:18:20.589 { 00:18:20.589 "subsystem": "accel", 00:18:20.589 "config": [ 00:18:20.589 { 00:18:20.589 "method": "accel_set_options", 00:18:20.589 "params": { 00:18:20.589 "small_cache_size": 128, 00:18:20.589 "large_cache_size": 16, 00:18:20.589 "task_count": 2048, 00:18:20.589 "sequence_count": 2048, 00:18:20.589 "buf_count": 2048 00:18:20.589 } 00:18:20.589 } 00:18:20.589 ] 00:18:20.589 }, 00:18:20.589 { 00:18:20.589 "subsystem": "bdev", 00:18:20.589 "config": [ 00:18:20.589 { 00:18:20.589 "method": "bdev_set_options", 00:18:20.589 "params": { 00:18:20.589 "bdev_io_pool_size": 65535, 00:18:20.589 "bdev_io_cache_size": 256, 00:18:20.589 "bdev_auto_examine": true, 00:18:20.589 "iobuf_small_cache_size": 128, 00:18:20.589 "iobuf_large_cache_size": 16 00:18:20.589 } 00:18:20.590 }, 00:18:20.590 { 00:18:20.590 "method": "bdev_raid_set_options", 00:18:20.590 "params": { 00:18:20.590 "process_window_size_kb": 1024, 00:18:20.590 "process_max_bandwidth_mb_sec": 0 00:18:20.590 } 00:18:20.590 }, 00:18:20.590 { 00:18:20.590 "method": "bdev_iscsi_set_options", 00:18:20.590 "params": { 00:18:20.590 "timeout_sec": 30 00:18:20.590 } 00:18:20.590 }, 00:18:20.590 { 00:18:20.590 "method": "bdev_nvme_set_options", 00:18:20.590 "params": { 00:18:20.590 "action_on_timeout": "none", 00:18:20.590 "timeout_us": 0, 00:18:20.590 "timeout_admin_us": 0, 00:18:20.590 "keep_alive_timeout_ms": 10000, 00:18:20.590 
"arbitration_burst": 0, 00:18:20.590 "low_priority_weight": 0, 00:18:20.590 "medium_priority_weight": 0, 00:18:20.590 "high_priority_weight": 0, 00:18:20.590 "nvme_adminq_poll_period_us": 10000, 00:18:20.590 "nvme_ioq_poll_period_us": 0, 00:18:20.590 "io_queue_requests": 512, 00:18:20.590 "delay_cmd_submit": true, 00:18:20.590 "transport_retry_count": 4, 00:18:20.590 "bdev_retry_count": 3, 00:18:20.590 "transport_ack_timeout": 0, 00:18:20.590 "ctrlr_loss_timeout_sec": 0, 00:18:20.590 "reconnect_delay_sec": 0, 00:18:20.590 "fast_io_fail_timeout_sec": 0, 00:18:20.590 "disable_auto_failback": false, 00:18:20.590 "generate_uuids": false, 00:18:20.590 "transport_tos": 0, 00:18:20.590 "nvme_error_stat": false, 00:18:20.590 "rdma_srq_size": 0, 00:18:20.590 "io_path_stat": false, 00:18:20.590 "allow_accel_sequence": false, 00:18:20.590 "rdma_max_cq_size": 0, 00:18:20.590 "rdma_cm_event_timeout_ms": 0, 00:18:20.590 "dhchap_digests": [ 00:18:20.590 "sha256", 00:18:20.590 "sha384", 00:18:20.590 "sha512" 00:18:20.590 ], 00:18:20.590 "dhchap_dhgroups": [ 00:18:20.590 "null", 00:18:20.590 "ffdhe2048", 00:18:20.590 "ffdhe3072", 00:18:20.590 "ffdhe4096", 00:18:20.590 "ffdhe6144", 00:18:20.590 "ffdhe8192" 00:18:20.590 ] 00:18:20.590 } 00:18:20.590 }, 00:18:20.590 { 00:18:20.590 "method": "bdev_nvme_attach_controller", 00:18:20.590 "params": { 00:18:20.590 "name": "TLSTEST", 00:18:20.590 "trtype": "TCP", 00:18:20.590 "adrfam": "IPv4", 00:18:20.590 "traddr": "10.0.0.2", 00:18:20.590 "trsvcid": "4420", 00:18:20.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.590 "prchk_reftag": false, 00:18:20.590 "prchk_guard": false, 00:18:20.590 "ctrlr_loss_timeout_sec": 0, 00:18:20.590 "reconnect_delay_sec": 0, 00:18:20.590 "fast_io_fail_timeout_sec": 0, 00:18:20.590 "psk": "key0", 00:18:20.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.590 "hdgst": false, 00:18:20.590 "ddgst": false, 00:18:20.590 "multipath": "multipath" 00:18:20.590 } 00:18:20.590 }, 00:18:20.590 { 00:18:20.590 
"method": "bdev_nvme_set_hotplug", 00:18:20.590 "params": { 00:18:20.590 "period_us": 100000, 00:18:20.590 "enable": false 00:18:20.590 } 00:18:20.590 }, 00:18:20.590 { 00:18:20.590 "method": "bdev_wait_for_examine" 00:18:20.590 } 00:18:20.590 ] 00:18:20.590 }, 00:18:20.590 { 00:18:20.590 "subsystem": "nbd", 00:18:20.590 "config": [] 00:18:20.590 } 00:18:20.590 ] 00:18:20.590 }' 00:18:20.590 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.590 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.590 [2024-11-19 17:36:22.602373] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:20.590 [2024-11-19 17:36:22.602420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477034 ] 00:18:20.590 [2024-11-19 17:36:22.675805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.590 [2024-11-19 17:36:22.718009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.849 [2024-11-19 17:36:22.870887] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.416 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.416 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:21.416 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:21.416 Running I/O for 10 seconds... 
00:18:23.731 5277.00 IOPS, 20.61 MiB/s [2024-11-19T16:36:26.891Z] 5383.50 IOPS, 21.03 MiB/s [2024-11-19T16:36:27.827Z] 5411.67 IOPS, 21.14 MiB/s [2024-11-19T16:36:28.764Z] 5333.25 IOPS, 20.83 MiB/s [2024-11-19T16:36:29.822Z] 5371.40 IOPS, 20.98 MiB/s [2024-11-19T16:36:30.773Z] 5358.33 IOPS, 20.93 MiB/s [2024-11-19T16:36:31.708Z] 5352.57 IOPS, 20.91 MiB/s [2024-11-19T16:36:32.643Z] 5365.25 IOPS, 20.96 MiB/s [2024-11-19T16:36:33.579Z] 5381.67 IOPS, 21.02 MiB/s [2024-11-19T16:36:33.579Z] 5394.50 IOPS, 21.07 MiB/s 00:18:31.356 Latency(us) 00:18:31.356 [2024-11-19T16:36:33.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.356 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.356 Verification LBA range: start 0x0 length 0x2000 00:18:31.356 TLSTESTn1 : 10.02 5397.62 21.08 0.00 0.00 23675.94 7693.36 23137.06 00:18:31.356 [2024-11-19T16:36:33.579Z] =================================================================================================================== 00:18:31.356 [2024-11-19T16:36:33.579Z] Total : 5397.62 21.08 0.00 0.00 23675.94 7693.36 23137.06 00:18:31.356 { 00:18:31.356 "results": [ 00:18:31.356 { 00:18:31.356 "job": "TLSTESTn1", 00:18:31.356 "core_mask": "0x4", 00:18:31.356 "workload": "verify", 00:18:31.356 "status": "finished", 00:18:31.356 "verify_range": { 00:18:31.356 "start": 0, 00:18:31.356 "length": 8192 00:18:31.356 }, 00:18:31.356 "queue_depth": 128, 00:18:31.356 "io_size": 4096, 00:18:31.356 "runtime": 10.017564, 00:18:31.356 "iops": 5397.619620897855, 00:18:31.356 "mibps": 21.084451644132248, 00:18:31.356 "io_failed": 0, 00:18:31.356 "io_timeout": 0, 00:18:31.356 "avg_latency_us": 23675.94494491542, 00:18:31.356 "min_latency_us": 7693.356521739131, 00:18:31.356 "max_latency_us": 23137.057391304348 00:18:31.356 } 00:18:31.356 ], 00:18:31.356 "core_count": 1 00:18:31.356 } 00:18:31.614 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:31.614 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3477034 00:18:31.614 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3477034 ']' 00:18:31.614 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3477034 00:18:31.614 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477034 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477034' 00:18:31.615 killing process with pid 3477034 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3477034 00:18:31.615 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.615 00:18:31.615 Latency(us) 00:18:31.615 [2024-11-19T16:36:33.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.615 [2024-11-19T16:36:33.838Z] =================================================================================================================== 00:18:31.615 [2024-11-19T16:36:33.838Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3477034 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3476804 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3476804 ']' 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3476804 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.615 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476804 00:18:31.874 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.874 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.874 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476804' 00:18:31.874 killing process with pid 3476804 00:18:31.874 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3476804 00:18:31.874 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3476804 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3478887 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3478887 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:31.874 
17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3478887 ']' 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.874 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.874 [2024-11-19 17:36:34.079024] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:31.874 [2024-11-19 17:36:34.079074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.134 [2024-11-19 17:36:34.160610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.134 [2024-11-19 17:36:34.202136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.134 [2024-11-19 17:36:34.202172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.134 [2024-11-19 17:36:34.202179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.134 [2024-11-19 17:36:34.202185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:32.134 [2024-11-19 17:36:34.202189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.134 [2024-11-19 17:36:34.202756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.701 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.701 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.701 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.701 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.701 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.959 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.959 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.e8Dh19KBkV 00:18:32.959 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.e8Dh19KBkV 00:18:32.959 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:32.959 [2024-11-19 17:36:35.120912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.959 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.216 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:33.473 [2024-11-19 17:36:35.533966] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:33.473 [2024-11-19 17:36:35.534193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.473 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.732 malloc0 00:18:33.732 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.990 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:33.990 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3479169 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3479169 /var/tmp/bdevperf.sock 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3479169 ']' 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.248 
17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.248 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.248 [2024-11-19 17:36:36.401774] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:34.248 [2024-11-19 17:36:36.401821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479169 ] 00:18:34.506 [2024-11-19 17:36:36.477829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.506 [2024-11-19 17:36:36.521101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.506 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.506 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.506 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:34.764 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:35.022 [2024-11-19 17:36:37.001842] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:35.022 nvme0n1 00:18:35.022 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.022 Running I/O for 1 seconds... 00:18:36.398 5372.00 IOPS, 20.98 MiB/s 00:18:36.398 Latency(us) 00:18:36.398 [2024-11-19T16:36:38.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.398 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:36.398 Verification LBA range: start 0x0 length 0x2000 00:18:36.398 nvme0n1 : 1.01 5427.20 21.20 0.00 0.00 23423.10 5043.42 22225.25 00:18:36.398 [2024-11-19T16:36:38.621Z] =================================================================================================================== 00:18:36.398 [2024-11-19T16:36:38.621Z] Total : 5427.20 21.20 0.00 0.00 23423.10 5043.42 22225.25 00:18:36.398 { 00:18:36.398 "results": [ 00:18:36.398 { 00:18:36.398 "job": "nvme0n1", 00:18:36.398 "core_mask": "0x2", 00:18:36.398 "workload": "verify", 00:18:36.398 "status": "finished", 00:18:36.398 "verify_range": { 00:18:36.398 "start": 0, 00:18:36.398 "length": 8192 00:18:36.398 }, 00:18:36.398 "queue_depth": 128, 00:18:36.398 "io_size": 4096, 00:18:36.398 "runtime": 1.013413, 00:18:36.398 "iops": 5427.204900667349, 00:18:36.398 "mibps": 21.20001914323183, 00:18:36.398 "io_failed": 0, 00:18:36.398 "io_timeout": 0, 00:18:36.398 "avg_latency_us": 23423.100013280637, 00:18:36.398 "min_latency_us": 5043.422608695652, 00:18:36.398 "max_latency_us": 22225.252173913042 00:18:36.398 } 00:18:36.398 ], 00:18:36.398 "core_count": 1 00:18:36.398 } 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3479169 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3479169 ']' 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3479169 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3479169 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3479169' 00:18:36.399 killing process with pid 3479169 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3479169 00:18:36.399 Received shutdown signal, test time was about 1.000000 seconds 00:18:36.399 00:18:36.399 Latency(us) 00:18:36.399 [2024-11-19T16:36:38.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.399 [2024-11-19T16:36:38.622Z] =================================================================================================================== 00:18:36.399 [2024-11-19T16:36:38.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3479169 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3478887 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3478887 ']' 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3478887 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3478887 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3478887' 00:18:36.399 killing process with pid 3478887 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3478887 00:18:36.399 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3478887 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3479630 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3479630 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3479630 ']' 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.657 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.657 [2024-11-19 17:36:38.703677] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:36.657 [2024-11-19 17:36:38.703724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.657 [2024-11-19 17:36:38.783367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.657 [2024-11-19 17:36:38.823807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.657 [2024-11-19 17:36:38.823843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.657 [2024-11-19 17:36:38.823851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.657 [2024-11-19 17:36:38.823857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.657 [2024-11-19 17:36:38.823862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.657 [2024-11-19 17:36:38.824463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.916 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.916 [2024-11-19 17:36:38.960722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.916 malloc0 00:18:36.916 [2024-11-19 17:36:38.989042] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.916 [2024-11-19 17:36:38.989249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3479653 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3479653 /var/tmp/bdevperf.sock 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3479653 ']' 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.916 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.916 [2024-11-19 17:36:39.063048] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:18:36.916 [2024-11-19 17:36:39.063089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479653 ] 00:18:37.174 [2024-11-19 17:36:39.136389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.174 [2024-11-19 17:36:39.177428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.174 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.174 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.174 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.e8Dh19KBkV 00:18:37.432 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:37.691 [2024-11-19 17:36:39.665458] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.691 nvme0n1 00:18:37.691 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.691 Running I/O for 1 seconds... 
00:18:38.885 5078.00 IOPS, 19.84 MiB/s 00:18:38.885 Latency(us) 00:18:38.885 [2024-11-19T16:36:41.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.885 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:38.885 Verification LBA range: start 0x0 length 0x2000 00:18:38.885 nvme0n1 : 1.02 5127.82 20.03 0.00 0.00 24790.45 5983.72 37384.01 00:18:38.885 [2024-11-19T16:36:41.108Z] =================================================================================================================== 00:18:38.885 [2024-11-19T16:36:41.108Z] Total : 5127.82 20.03 0.00 0.00 24790.45 5983.72 37384.01 00:18:38.885 { 00:18:38.885 "results": [ 00:18:38.885 { 00:18:38.885 "job": "nvme0n1", 00:18:38.885 "core_mask": "0x2", 00:18:38.886 "workload": "verify", 00:18:38.886 "status": "finished", 00:18:38.886 "verify_range": { 00:18:38.886 "start": 0, 00:18:38.886 "length": 8192 00:18:38.886 }, 00:18:38.886 "queue_depth": 128, 00:18:38.886 "io_size": 4096, 00:18:38.886 "runtime": 1.015246, 00:18:38.886 "iops": 5127.82123741438, 00:18:38.886 "mibps": 20.030551708649924, 00:18:38.886 "io_failed": 0, 00:18:38.886 "io_timeout": 0, 00:18:38.886 "avg_latency_us": 24790.45145100135, 00:18:38.886 "min_latency_us": 5983.721739130435, 00:18:38.886 "max_latency_us": 37384.013913043476 00:18:38.886 } 00:18:38.886 ], 00:18:38.886 "core_count": 1 00:18:38.886 } 00:18:38.886 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:38.886 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.886 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.886 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.886 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:38.886 "subsystems": [ 00:18:38.886 { 00:18:38.886 "subsystem": 
"keyring", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "keyring_file_add_key", 00:18:38.886 "params": { 00:18:38.886 "name": "key0", 00:18:38.886 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:38.886 } 00:18:38.886 } 00:18:38.886 ] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "iobuf", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "iobuf_set_options", 00:18:38.886 "params": { 00:18:38.886 "small_pool_count": 8192, 00:18:38.886 "large_pool_count": 1024, 00:18:38.886 "small_bufsize": 8192, 00:18:38.886 "large_bufsize": 135168, 00:18:38.886 "enable_numa": false 00:18:38.886 } 00:18:38.886 } 00:18:38.886 ] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "sock", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "sock_set_default_impl", 00:18:38.886 "params": { 00:18:38.886 "impl_name": "posix" 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "sock_impl_set_options", 00:18:38.886 "params": { 00:18:38.886 "impl_name": "ssl", 00:18:38.886 "recv_buf_size": 4096, 00:18:38.886 "send_buf_size": 4096, 00:18:38.886 "enable_recv_pipe": true, 00:18:38.886 "enable_quickack": false, 00:18:38.886 "enable_placement_id": 0, 00:18:38.886 "enable_zerocopy_send_server": true, 00:18:38.886 "enable_zerocopy_send_client": false, 00:18:38.886 "zerocopy_threshold": 0, 00:18:38.886 "tls_version": 0, 00:18:38.886 "enable_ktls": false 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "sock_impl_set_options", 00:18:38.886 "params": { 00:18:38.886 "impl_name": "posix", 00:18:38.886 "recv_buf_size": 2097152, 00:18:38.886 "send_buf_size": 2097152, 00:18:38.886 "enable_recv_pipe": true, 00:18:38.886 "enable_quickack": false, 00:18:38.886 "enable_placement_id": 0, 00:18:38.886 "enable_zerocopy_send_server": true, 00:18:38.886 "enable_zerocopy_send_client": false, 00:18:38.886 "zerocopy_threshold": 0, 00:18:38.886 "tls_version": 0, 00:18:38.886 "enable_ktls": false 00:18:38.886 } 00:18:38.886 } 00:18:38.886 
] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "vmd", 00:18:38.886 "config": [] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "accel", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "accel_set_options", 00:18:38.886 "params": { 00:18:38.886 "small_cache_size": 128, 00:18:38.886 "large_cache_size": 16, 00:18:38.886 "task_count": 2048, 00:18:38.886 "sequence_count": 2048, 00:18:38.886 "buf_count": 2048 00:18:38.886 } 00:18:38.886 } 00:18:38.886 ] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "bdev", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "bdev_set_options", 00:18:38.886 "params": { 00:18:38.886 "bdev_io_pool_size": 65535, 00:18:38.886 "bdev_io_cache_size": 256, 00:18:38.886 "bdev_auto_examine": true, 00:18:38.886 "iobuf_small_cache_size": 128, 00:18:38.886 "iobuf_large_cache_size": 16 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "bdev_raid_set_options", 00:18:38.886 "params": { 00:18:38.886 "process_window_size_kb": 1024, 00:18:38.886 "process_max_bandwidth_mb_sec": 0 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "bdev_iscsi_set_options", 00:18:38.886 "params": { 00:18:38.886 "timeout_sec": 30 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "bdev_nvme_set_options", 00:18:38.886 "params": { 00:18:38.886 "action_on_timeout": "none", 00:18:38.886 "timeout_us": 0, 00:18:38.886 "timeout_admin_us": 0, 00:18:38.886 "keep_alive_timeout_ms": 10000, 00:18:38.886 "arbitration_burst": 0, 00:18:38.886 "low_priority_weight": 0, 00:18:38.886 "medium_priority_weight": 0, 00:18:38.886 "high_priority_weight": 0, 00:18:38.886 "nvme_adminq_poll_period_us": 10000, 00:18:38.886 "nvme_ioq_poll_period_us": 0, 00:18:38.886 "io_queue_requests": 0, 00:18:38.886 "delay_cmd_submit": true, 00:18:38.886 "transport_retry_count": 4, 00:18:38.886 "bdev_retry_count": 3, 00:18:38.886 "transport_ack_timeout": 0, 00:18:38.886 "ctrlr_loss_timeout_sec": 0, 
00:18:38.886 "reconnect_delay_sec": 0, 00:18:38.886 "fast_io_fail_timeout_sec": 0, 00:18:38.886 "disable_auto_failback": false, 00:18:38.886 "generate_uuids": false, 00:18:38.886 "transport_tos": 0, 00:18:38.886 "nvme_error_stat": false, 00:18:38.886 "rdma_srq_size": 0, 00:18:38.886 "io_path_stat": false, 00:18:38.886 "allow_accel_sequence": false, 00:18:38.886 "rdma_max_cq_size": 0, 00:18:38.886 "rdma_cm_event_timeout_ms": 0, 00:18:38.886 "dhchap_digests": [ 00:18:38.886 "sha256", 00:18:38.886 "sha384", 00:18:38.886 "sha512" 00:18:38.886 ], 00:18:38.886 "dhchap_dhgroups": [ 00:18:38.886 "null", 00:18:38.886 "ffdhe2048", 00:18:38.886 "ffdhe3072", 00:18:38.886 "ffdhe4096", 00:18:38.886 "ffdhe6144", 00:18:38.886 "ffdhe8192" 00:18:38.886 ] 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "bdev_nvme_set_hotplug", 00:18:38.887 "params": { 00:18:38.887 "period_us": 100000, 00:18:38.887 "enable": false 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "bdev_malloc_create", 00:18:38.887 "params": { 00:18:38.887 "name": "malloc0", 00:18:38.887 "num_blocks": 8192, 00:18:38.887 "block_size": 4096, 00:18:38.887 "physical_block_size": 4096, 00:18:38.887 "uuid": "c7133894-40a6-4196-85b2-91a39897b495", 00:18:38.887 "optimal_io_boundary": 0, 00:18:38.887 "md_size": 0, 00:18:38.887 "dif_type": 0, 00:18:38.887 "dif_is_head_of_md": false, 00:18:38.887 "dif_pi_format": 0 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "bdev_wait_for_examine" 00:18:38.887 } 00:18:38.887 ] 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "subsystem": "nbd", 00:18:38.887 "config": [] 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "subsystem": "scheduler", 00:18:38.887 "config": [ 00:18:38.887 { 00:18:38.887 "method": "framework_set_scheduler", 00:18:38.887 "params": { 00:18:38.887 "name": "static" 00:18:38.887 } 00:18:38.887 } 00:18:38.887 ] 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "subsystem": "nvmf", 00:18:38.887 "config": [ 00:18:38.887 { 
00:18:38.887 "method": "nvmf_set_config", 00:18:38.887 "params": { 00:18:38.887 "discovery_filter": "match_any", 00:18:38.887 "admin_cmd_passthru": { 00:18:38.887 "identify_ctrlr": false 00:18:38.887 }, 00:18:38.887 "dhchap_digests": [ 00:18:38.887 "sha256", 00:18:38.887 "sha384", 00:18:38.887 "sha512" 00:18:38.887 ], 00:18:38.887 "dhchap_dhgroups": [ 00:18:38.887 "null", 00:18:38.887 "ffdhe2048", 00:18:38.887 "ffdhe3072", 00:18:38.887 "ffdhe4096", 00:18:38.887 "ffdhe6144", 00:18:38.887 "ffdhe8192" 00:18:38.887 ] 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "nvmf_set_max_subsystems", 00:18:38.887 "params": { 00:18:38.887 "max_subsystems": 1024 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "nvmf_set_crdt", 00:18:38.887 "params": { 00:18:38.887 "crdt1": 0, 00:18:38.887 "crdt2": 0, 00:18:38.887 "crdt3": 0 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "nvmf_create_transport", 00:18:38.887 "params": { 00:18:38.887 "trtype": "TCP", 00:18:38.887 "max_queue_depth": 128, 00:18:38.887 "max_io_qpairs_per_ctrlr": 127, 00:18:38.887 "in_capsule_data_size": 4096, 00:18:38.887 "max_io_size": 131072, 00:18:38.887 "io_unit_size": 131072, 00:18:38.887 "max_aq_depth": 128, 00:18:38.887 "num_shared_buffers": 511, 00:18:38.887 "buf_cache_size": 4294967295, 00:18:38.887 "dif_insert_or_strip": false, 00:18:38.887 "zcopy": false, 00:18:38.887 "c2h_success": false, 00:18:38.887 "sock_priority": 0, 00:18:38.887 "abort_timeout_sec": 1, 00:18:38.887 "ack_timeout": 0, 00:18:38.887 "data_wr_pool_size": 0 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "nvmf_create_subsystem", 00:18:38.887 "params": { 00:18:38.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.887 "allow_any_host": false, 00:18:38.887 "serial_number": "00000000000000000000", 00:18:38.887 "model_number": "SPDK bdev Controller", 00:18:38.887 "max_namespaces": 32, 00:18:38.887 "min_cntlid": 1, 00:18:38.887 "max_cntlid": 65519, 00:18:38.887 
"ana_reporting": false 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "nvmf_subsystem_add_host", 00:18:38.887 "params": { 00:18:38.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.887 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.887 "psk": "key0" 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "nvmf_subsystem_add_ns", 00:18:38.887 "params": { 00:18:38.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.887 "namespace": { 00:18:38.887 "nsid": 1, 00:18:38.887 "bdev_name": "malloc0", 00:18:38.887 "nguid": "C713389440A6419685B291A39897B495", 00:18:38.887 "uuid": "c7133894-40a6-4196-85b2-91a39897b495", 00:18:38.887 "no_auto_visible": false 00:18:38.887 } 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "nvmf_subsystem_add_listener", 00:18:38.887 "params": { 00:18:38.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.887 "listen_address": { 00:18:38.887 "trtype": "TCP", 00:18:38.887 "adrfam": "IPv4", 00:18:38.887 "traddr": "10.0.0.2", 00:18:38.887 "trsvcid": "4420" 00:18:38.887 }, 00:18:38.887 "secure_channel": false, 00:18:38.887 "sock_impl": "ssl" 00:18:38.887 } 00:18:38.887 } 00:18:38.887 ] 00:18:38.887 } 00:18:38.887 ] 00:18:38.887 }' 00:18:38.887 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:39.146 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:39.146 "subsystems": [ 00:18:39.146 { 00:18:39.146 "subsystem": "keyring", 00:18:39.146 "config": [ 00:18:39.146 { 00:18:39.146 "method": "keyring_file_add_key", 00:18:39.146 "params": { 00:18:39.146 "name": "key0", 00:18:39.146 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:39.146 } 00:18:39.146 } 00:18:39.146 ] 00:18:39.146 }, 00:18:39.146 { 00:18:39.146 "subsystem": "iobuf", 00:18:39.146 "config": [ 00:18:39.146 { 00:18:39.146 "method": "iobuf_set_options", 00:18:39.146 "params": { 00:18:39.146 
"small_pool_count": 8192, 00:18:39.146 "large_pool_count": 1024, 00:18:39.146 "small_bufsize": 8192, 00:18:39.146 "large_bufsize": 135168, 00:18:39.146 "enable_numa": false 00:18:39.146 } 00:18:39.146 } 00:18:39.146 ] 00:18:39.146 }, 00:18:39.146 { 00:18:39.146 "subsystem": "sock", 00:18:39.146 "config": [ 00:18:39.146 { 00:18:39.146 "method": "sock_set_default_impl", 00:18:39.146 "params": { 00:18:39.146 "impl_name": "posix" 00:18:39.146 } 00:18:39.146 }, 00:18:39.146 { 00:18:39.146 "method": "sock_impl_set_options", 00:18:39.146 "params": { 00:18:39.146 "impl_name": "ssl", 00:18:39.146 "recv_buf_size": 4096, 00:18:39.146 "send_buf_size": 4096, 00:18:39.146 "enable_recv_pipe": true, 00:18:39.146 "enable_quickack": false, 00:18:39.146 "enable_placement_id": 0, 00:18:39.146 "enable_zerocopy_send_server": true, 00:18:39.146 "enable_zerocopy_send_client": false, 00:18:39.146 "zerocopy_threshold": 0, 00:18:39.146 "tls_version": 0, 00:18:39.146 "enable_ktls": false 00:18:39.146 } 00:18:39.146 }, 00:18:39.146 { 00:18:39.146 "method": "sock_impl_set_options", 00:18:39.146 "params": { 00:18:39.146 "impl_name": "posix", 00:18:39.146 "recv_buf_size": 2097152, 00:18:39.146 "send_buf_size": 2097152, 00:18:39.146 "enable_recv_pipe": true, 00:18:39.146 "enable_quickack": false, 00:18:39.146 "enable_placement_id": 0, 00:18:39.146 "enable_zerocopy_send_server": true, 00:18:39.146 "enable_zerocopy_send_client": false, 00:18:39.146 "zerocopy_threshold": 0, 00:18:39.146 "tls_version": 0, 00:18:39.146 "enable_ktls": false 00:18:39.146 } 00:18:39.146 } 00:18:39.146 ] 00:18:39.146 }, 00:18:39.146 { 00:18:39.147 "subsystem": "vmd", 00:18:39.147 "config": [] 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "subsystem": "accel", 00:18:39.147 "config": [ 00:18:39.147 { 00:18:39.147 "method": "accel_set_options", 00:18:39.147 "params": { 00:18:39.147 "small_cache_size": 128, 00:18:39.147 "large_cache_size": 16, 00:18:39.147 "task_count": 2048, 00:18:39.147 "sequence_count": 2048, 00:18:39.147 
"buf_count": 2048 00:18:39.147 } 00:18:39.147 } 00:18:39.147 ] 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "subsystem": "bdev", 00:18:39.147 "config": [ 00:18:39.147 { 00:18:39.147 "method": "bdev_set_options", 00:18:39.147 "params": { 00:18:39.147 "bdev_io_pool_size": 65535, 00:18:39.147 "bdev_io_cache_size": 256, 00:18:39.147 "bdev_auto_examine": true, 00:18:39.147 "iobuf_small_cache_size": 128, 00:18:39.147 "iobuf_large_cache_size": 16 00:18:39.147 } 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "method": "bdev_raid_set_options", 00:18:39.147 "params": { 00:18:39.147 "process_window_size_kb": 1024, 00:18:39.147 "process_max_bandwidth_mb_sec": 0 00:18:39.147 } 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "method": "bdev_iscsi_set_options", 00:18:39.147 "params": { 00:18:39.147 "timeout_sec": 30 00:18:39.147 } 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "method": "bdev_nvme_set_options", 00:18:39.147 "params": { 00:18:39.147 "action_on_timeout": "none", 00:18:39.147 "timeout_us": 0, 00:18:39.147 "timeout_admin_us": 0, 00:18:39.147 "keep_alive_timeout_ms": 10000, 00:18:39.147 "arbitration_burst": 0, 00:18:39.147 "low_priority_weight": 0, 00:18:39.147 "medium_priority_weight": 0, 00:18:39.147 "high_priority_weight": 0, 00:18:39.147 "nvme_adminq_poll_period_us": 10000, 00:18:39.147 "nvme_ioq_poll_period_us": 0, 00:18:39.147 "io_queue_requests": 512, 00:18:39.147 "delay_cmd_submit": true, 00:18:39.147 "transport_retry_count": 4, 00:18:39.147 "bdev_retry_count": 3, 00:18:39.147 "transport_ack_timeout": 0, 00:18:39.147 "ctrlr_loss_timeout_sec": 0, 00:18:39.147 "reconnect_delay_sec": 0, 00:18:39.147 "fast_io_fail_timeout_sec": 0, 00:18:39.147 "disable_auto_failback": false, 00:18:39.147 "generate_uuids": false, 00:18:39.147 "transport_tos": 0, 00:18:39.147 "nvme_error_stat": false, 00:18:39.147 "rdma_srq_size": 0, 00:18:39.147 "io_path_stat": false, 00:18:39.147 "allow_accel_sequence": false, 00:18:39.147 "rdma_max_cq_size": 0, 00:18:39.147 "rdma_cm_event_timeout_ms": 0, 
00:18:39.147 "dhchap_digests": [ 00:18:39.147 "sha256", 00:18:39.147 "sha384", 00:18:39.147 "sha512" 00:18:39.147 ], 00:18:39.147 "dhchap_dhgroups": [ 00:18:39.147 "null", 00:18:39.147 "ffdhe2048", 00:18:39.147 "ffdhe3072", 00:18:39.147 "ffdhe4096", 00:18:39.147 "ffdhe6144", 00:18:39.147 "ffdhe8192" 00:18:39.147 ] 00:18:39.147 } 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "method": "bdev_nvme_attach_controller", 00:18:39.147 "params": { 00:18:39.147 "name": "nvme0", 00:18:39.147 "trtype": "TCP", 00:18:39.147 "adrfam": "IPv4", 00:18:39.147 "traddr": "10.0.0.2", 00:18:39.147 "trsvcid": "4420", 00:18:39.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.147 "prchk_reftag": false, 00:18:39.147 "prchk_guard": false, 00:18:39.147 "ctrlr_loss_timeout_sec": 0, 00:18:39.147 "reconnect_delay_sec": 0, 00:18:39.147 "fast_io_fail_timeout_sec": 0, 00:18:39.147 "psk": "key0", 00:18:39.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.147 "hdgst": false, 00:18:39.147 "ddgst": false, 00:18:39.147 "multipath": "multipath" 00:18:39.147 } 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "method": "bdev_nvme_set_hotplug", 00:18:39.147 "params": { 00:18:39.147 "period_us": 100000, 00:18:39.147 "enable": false 00:18:39.147 } 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "method": "bdev_enable_histogram", 00:18:39.147 "params": { 00:18:39.147 "name": "nvme0n1", 00:18:39.147 "enable": true 00:18:39.147 } 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "method": "bdev_wait_for_examine" 00:18:39.147 } 00:18:39.147 ] 00:18:39.147 }, 00:18:39.147 { 00:18:39.147 "subsystem": "nbd", 00:18:39.147 "config": [] 00:18:39.147 } 00:18:39.147 ] 00:18:39.147 }' 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3479653 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3479653 ']' 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3479653 00:18:39.147 17:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3479653 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3479653' 00:18:39.147 killing process with pid 3479653 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3479653 00:18:39.147 Received shutdown signal, test time was about 1.000000 seconds 00:18:39.147 00:18:39.147 Latency(us) 00:18:39.147 [2024-11-19T16:36:41.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.147 [2024-11-19T16:36:41.370Z] =================================================================================================================== 00:18:39.147 [2024-11-19T16:36:41.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.147 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3479653 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3479630 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3479630 ']' 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3479630 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.407 
17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3479630 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3479630' 00:18:39.407 killing process with pid 3479630 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3479630 00:18:39.407 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3479630 00:18:39.666 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:39.666 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.666 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.666 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:39.666 "subsystems": [ 00:18:39.666 { 00:18:39.666 "subsystem": "keyring", 00:18:39.666 "config": [ 00:18:39.666 { 00:18:39.666 "method": "keyring_file_add_key", 00:18:39.666 "params": { 00:18:39.666 "name": "key0", 00:18:39.666 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:39.666 } 00:18:39.666 } 00:18:39.666 ] 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "subsystem": "iobuf", 00:18:39.666 "config": [ 00:18:39.666 { 00:18:39.666 "method": "iobuf_set_options", 00:18:39.666 "params": { 00:18:39.666 "small_pool_count": 8192, 00:18:39.666 "large_pool_count": 1024, 00:18:39.666 "small_bufsize": 8192, 00:18:39.666 "large_bufsize": 135168, 00:18:39.666 "enable_numa": false 00:18:39.666 } 00:18:39.666 } 00:18:39.666 ] 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "subsystem": "sock", 00:18:39.666 "config": [ 
00:18:39.666 { 00:18:39.666 "method": "sock_set_default_impl", 00:18:39.666 "params": { 00:18:39.666 "impl_name": "posix" 00:18:39.666 } 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "method": "sock_impl_set_options", 00:18:39.666 "params": { 00:18:39.666 "impl_name": "ssl", 00:18:39.666 "recv_buf_size": 4096, 00:18:39.666 "send_buf_size": 4096, 00:18:39.666 "enable_recv_pipe": true, 00:18:39.666 "enable_quickack": false, 00:18:39.666 "enable_placement_id": 0, 00:18:39.666 "enable_zerocopy_send_server": true, 00:18:39.666 "enable_zerocopy_send_client": false, 00:18:39.666 "zerocopy_threshold": 0, 00:18:39.666 "tls_version": 0, 00:18:39.666 "enable_ktls": false 00:18:39.666 } 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "method": "sock_impl_set_options", 00:18:39.666 "params": { 00:18:39.666 "impl_name": "posix", 00:18:39.666 "recv_buf_size": 2097152, 00:18:39.666 "send_buf_size": 2097152, 00:18:39.666 "enable_recv_pipe": true, 00:18:39.666 "enable_quickack": false, 00:18:39.666 "enable_placement_id": 0, 00:18:39.666 "enable_zerocopy_send_server": true, 00:18:39.666 "enable_zerocopy_send_client": false, 00:18:39.666 "zerocopy_threshold": 0, 00:18:39.666 "tls_version": 0, 00:18:39.666 "enable_ktls": false 00:18:39.666 } 00:18:39.666 } 00:18:39.666 ] 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "subsystem": "vmd", 00:18:39.666 "config": [] 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "subsystem": "accel", 00:18:39.666 "config": [ 00:18:39.666 { 00:18:39.666 "method": "accel_set_options", 00:18:39.666 "params": { 00:18:39.666 "small_cache_size": 128, 00:18:39.666 "large_cache_size": 16, 00:18:39.666 "task_count": 2048, 00:18:39.666 "sequence_count": 2048, 00:18:39.666 "buf_count": 2048 00:18:39.666 } 00:18:39.666 } 00:18:39.666 ] 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "subsystem": "bdev", 00:18:39.666 "config": [ 00:18:39.666 { 00:18:39.666 "method": "bdev_set_options", 00:18:39.666 "params": { 00:18:39.666 "bdev_io_pool_size": 65535, 00:18:39.666 "bdev_io_cache_size": 
256, 00:18:39.666 "bdev_auto_examine": true, 00:18:39.666 "iobuf_small_cache_size": 128, 00:18:39.666 "iobuf_large_cache_size": 16 00:18:39.666 } 00:18:39.666 }, 00:18:39.666 { 00:18:39.666 "method": "bdev_raid_set_options", 00:18:39.666 "params": { 00:18:39.666 "process_window_size_kb": 1024, 00:18:39.666 "process_max_bandwidth_mb_sec": 0 00:18:39.666 } 00:18:39.666 }, 00:18:39.666 { 00:18:39.667 "method": "bdev_iscsi_set_options", 00:18:39.667 "params": { 00:18:39.667 "timeout_sec": 30 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "bdev_nvme_set_options", 00:18:39.667 "params": { 00:18:39.667 "action_on_timeout": "none", 00:18:39.667 "timeout_us": 0, 00:18:39.667 "timeout_admin_us": 0, 00:18:39.667 "keep_alive_timeout_ms": 10000, 00:18:39.667 "arbitration_burst": 0, 00:18:39.667 "low_priority_weight": 0, 00:18:39.667 "medium_priority_weight": 0, 00:18:39.667 "high_priority_weight": 0, 00:18:39.667 "nvme_adminq_poll_period_us": 10000, 00:18:39.667 "nvme_ioq_poll_period_us": 0, 00:18:39.667 "io_queue_requests": 0, 00:18:39.667 "delay_cmd_submit": true, 00:18:39.667 "transport_retry_count": 4, 00:18:39.667 "bdev_retry_count": 3, 00:18:39.667 "transport_ack_timeout": 0, 00:18:39.667 "ctrlr_loss_timeout_sec": 0, 00:18:39.667 "reconnect_delay_sec": 0, 00:18:39.667 "fast_io_fail_timeout_sec": 0, 00:18:39.667 "disable_auto_failback": false, 00:18:39.667 "generate_uuids": false, 00:18:39.667 "transport_tos": 0, 00:18:39.667 "nvme_error_stat": false, 00:18:39.667 "rdma_srq_size": 0, 00:18:39.667 "io_path_stat": false, 00:18:39.667 "allow_accel_sequence": false, 00:18:39.667 "rdma_max_cq_size": 0, 00:18:39.667 "rdma_cm_event_timeout_ms": 0, 00:18:39.667 "dhchap_digests": [ 00:18:39.667 "sha256", 00:18:39.667 "sha384", 00:18:39.667 "sha512" 00:18:39.667 ], 00:18:39.667 "dhchap_dhgroups": [ 00:18:39.667 "null", 00:18:39.667 "ffdhe2048", 00:18:39.667 "ffdhe3072", 00:18:39.667 "ffdhe4096", 00:18:39.667 "ffdhe6144", 00:18:39.667 "ffdhe8192" 00:18:39.667 ] 
00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "bdev_nvme_set_hotplug", 00:18:39.667 "params": { 00:18:39.667 "period_us": 100000, 00:18:39.667 "enable": false 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "bdev_malloc_create", 00:18:39.667 "params": { 00:18:39.667 "name": "malloc0", 00:18:39.667 "num_blocks": 8192, 00:18:39.667 "block_size": 4096, 00:18:39.667 "physical_block_size": 4096, 00:18:39.667 "uuid": "c7133894-40a6-4196-85b2-91a39897b495", 00:18:39.667 "optimal_io_boundary": 0, 00:18:39.667 "md_size": 0, 00:18:39.667 "dif_type": 0, 00:18:39.667 "dif_is_head_of_md": false, 00:18:39.667 "dif_pi_format": 0 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "bdev_wait_for_examine" 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "nbd", 00:18:39.667 "config": [] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "scheduler", 00:18:39.667 "config": [ 00:18:39.667 { 00:18:39.667 "method": "framework_set_scheduler", 00:18:39.667 "params": { 00:18:39.667 "name": "static" 00:18:39.667 } 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "nvmf", 00:18:39.667 "config": [ 00:18:39.667 { 00:18:39.667 "method": "nvmf_set_config", 00:18:39.667 "params": { 00:18:39.667 "discovery_filter": "match_any", 00:18:39.667 "admin_cmd_passthru": { 00:18:39.667 "identify_ctrlr": false 00:18:39.667 }, 00:18:39.667 "dhchap_digests": [ 00:18:39.667 "sha256", 00:18:39.667 "sha384", 00:18:39.667 "sha512" 00:18:39.667 ], 00:18:39.667 "dhchap_dhgroups": [ 00:18:39.667 "null", 00:18:39.667 "ffdhe2048", 00:18:39.667 "ffdhe3072", 00:18:39.667 "ffdhe4096", 00:18:39.667 "ffdhe6144", 00:18:39.667 "ffdhe8192" 00:18:39.667 ] 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "nvmf_set_max_subsystems", 00:18:39.667 "params": { 00:18:39.667 "max_subsystems": 1024 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": 
"nvmf_set_crdt", 00:18:39.667 "params": { 00:18:39.667 "crdt1": 0, 00:18:39.667 "crdt2": 0, 00:18:39.667 "crdt3": 0 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "nvmf_create_transport", 00:18:39.667 "params": { 00:18:39.667 "trtype": "TCP", 00:18:39.667 "max_queue_depth": 128, 00:18:39.667 "max_io_qpairs_per_ctrlr": 127, 00:18:39.667 "in_capsule_data_size": 4096, 00:18:39.667 "max_io_size": 131072, 00:18:39.667 "io_unit_size": 131072, 00:18:39.667 "max_aq_depth": 128, 00:18:39.667 "num_shared_buffers": 511, 00:18:39.667 "buf_cache_size": 4294967295, 00:18:39.667 "dif_insert_or_strip": false, 00:18:39.667 "zcopy": false, 00:18:39.667 "c2h_success": false, 00:18:39.667 "sock_priority": 0, 00:18:39.667 "abort_timeout_sec": 1, 00:18:39.667 "ack_timeout": 0, 00:18:39.667 "data_wr_pool_size": 0 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "nvmf_create_subsystem", 00:18:39.667 "params": { 00:18:39.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.667 "allow_any_host": false, 00:18:39.667 "serial_number": "00000000000000000000", 00:18:39.667 "model_number": "SPDK bdev Controller", 00:18:39.667 "max_namespaces": 32, 00:18:39.667 "min_cntlid": 1, 00:18:39.667 "max_cntlid": 65519, 00:18:39.667 "ana_reporting": false 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "nvmf_subsystem_add_host", 00:18:39.667 "params": { 00:18:39.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.667 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.667 "psk": "key0" 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "nvmf_subsystem_add_ns", 00:18:39.667 "params": { 00:18:39.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.667 "namespace": { 00:18:39.667 "nsid": 1, 00:18:39.667 "bdev_name": "malloc0", 00:18:39.667 "nguid": "C713389440A6419685B291A39897B495", 00:18:39.667 "uuid": "c7133894-40a6-4196-85b2-91a39897b495", 00:18:39.667 "no_auto_visible": false 00:18:39.667 } 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 
00:18:39.667 "method": "nvmf_subsystem_add_listener", 00:18:39.667 "params": { 00:18:39.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.667 "listen_address": { 00:18:39.667 "trtype": "TCP", 00:18:39.667 "adrfam": "IPv4", 00:18:39.667 "traddr": "10.0.0.2", 00:18:39.667 "trsvcid": "4420" 00:18:39.667 }, 00:18:39.667 "secure_channel": false, 00:18:39.667 "sock_impl": "ssl" 00:18:39.667 } 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 }' 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3480131 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3480131 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3480131 ']' 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.667 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.667 [2024-11-19 17:36:41.749588] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:18:39.667 [2024-11-19 17:36:41.749638] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.667 [2024-11-19 17:36:41.826141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.667 [2024-11-19 17:36:41.866498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.667 [2024-11-19 17:36:41.866538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.667 [2024-11-19 17:36:41.866545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.667 [2024-11-19 17:36:41.866552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.667 [2024-11-19 17:36:41.866557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.667 [2024-11-19 17:36:41.867177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.926 [2024-11-19 17:36:42.079086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.926 [2024-11-19 17:36:42.111109] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.926 [2024-11-19 17:36:42.111320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.493 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.493 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:40.493 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:40.493 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.493 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3480375 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3480375 /var/tmp/bdevperf.sock 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3480375 ']' 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:40.494 "subsystems": [ 00:18:40.494 { 00:18:40.494 "subsystem": "keyring", 00:18:40.494 "config": [ 00:18:40.494 { 00:18:40.494 "method": "keyring_file_add_key", 00:18:40.494 "params": { 00:18:40.494 "name": "key0", 00:18:40.494 "path": "/tmp/tmp.e8Dh19KBkV" 00:18:40.494 } 00:18:40.494 } 00:18:40.494 ] 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "subsystem": "iobuf", 00:18:40.494 "config": [ 00:18:40.494 { 00:18:40.494 "method": "iobuf_set_options", 00:18:40.494 "params": { 00:18:40.494 "small_pool_count": 8192, 00:18:40.494 "large_pool_count": 1024, 00:18:40.494 "small_bufsize": 8192, 00:18:40.494 "large_bufsize": 135168, 00:18:40.494 "enable_numa": false 00:18:40.494 } 00:18:40.494 } 00:18:40.494 ] 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "subsystem": "sock", 00:18:40.494 "config": [ 00:18:40.494 { 00:18:40.494 "method": "sock_set_default_impl", 00:18:40.494 "params": { 00:18:40.494 "impl_name": "posix" 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "sock_impl_set_options", 00:18:40.494 "params": { 00:18:40.494 "impl_name": "ssl", 00:18:40.494 "recv_buf_size": 4096, 00:18:40.494 "send_buf_size": 4096, 00:18:40.494 "enable_recv_pipe": true, 00:18:40.494 "enable_quickack": false, 00:18:40.494 "enable_placement_id": 0, 00:18:40.494 "enable_zerocopy_send_server": true, 00:18:40.494 "enable_zerocopy_send_client": false, 00:18:40.494 "zerocopy_threshold": 0, 00:18:40.494 "tls_version": 0, 00:18:40.494 "enable_ktls": false 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "sock_impl_set_options", 00:18:40.494 "params": { 
00:18:40.494 "impl_name": "posix", 00:18:40.494 "recv_buf_size": 2097152, 00:18:40.494 "send_buf_size": 2097152, 00:18:40.494 "enable_recv_pipe": true, 00:18:40.494 "enable_quickack": false, 00:18:40.494 "enable_placement_id": 0, 00:18:40.494 "enable_zerocopy_send_server": true, 00:18:40.494 "enable_zerocopy_send_client": false, 00:18:40.494 "zerocopy_threshold": 0, 00:18:40.494 "tls_version": 0, 00:18:40.494 "enable_ktls": false 00:18:40.494 } 00:18:40.494 } 00:18:40.494 ] 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "subsystem": "vmd", 00:18:40.494 "config": [] 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "subsystem": "accel", 00:18:40.494 "config": [ 00:18:40.494 { 00:18:40.494 "method": "accel_set_options", 00:18:40.494 "params": { 00:18:40.494 "small_cache_size": 128, 00:18:40.494 "large_cache_size": 16, 00:18:40.494 "task_count": 2048, 00:18:40.494 "sequence_count": 2048, 00:18:40.494 "buf_count": 2048 00:18:40.494 } 00:18:40.494 } 00:18:40.494 ] 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "subsystem": "bdev", 00:18:40.494 "config": [ 00:18:40.494 { 00:18:40.494 "method": "bdev_set_options", 00:18:40.494 "params": { 00:18:40.494 "bdev_io_pool_size": 65535, 00:18:40.494 "bdev_io_cache_size": 256, 00:18:40.494 "bdev_auto_examine": true, 00:18:40.494 "iobuf_small_cache_size": 128, 00:18:40.494 "iobuf_large_cache_size": 16 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "bdev_raid_set_options", 00:18:40.494 "params": { 00:18:40.494 "process_window_size_kb": 1024, 00:18:40.494 "process_max_bandwidth_mb_sec": 0 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "bdev_iscsi_set_options", 00:18:40.494 "params": { 00:18:40.494 "timeout_sec": 30 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "bdev_nvme_set_options", 00:18:40.494 "params": { 00:18:40.494 "action_on_timeout": "none", 00:18:40.494 "timeout_us": 0, 00:18:40.494 "timeout_admin_us": 0, 00:18:40.494 "keep_alive_timeout_ms": 10000, 00:18:40.494 
"arbitration_burst": 0, 00:18:40.494 "low_priority_weight": 0, 00:18:40.494 "medium_priority_weight": 0, 00:18:40.494 "high_priority_weight": 0, 00:18:40.494 "nvme_adminq_poll_period_us": 10000, 00:18:40.494 "nvme_ioq_poll_period_us": 0, 00:18:40.494 "io_queue_requests": 512, 00:18:40.494 "delay_cmd_submit": true, 00:18:40.494 "transport_retry_count": 4, 00:18:40.494 "bdev_retry_count": 3, 00:18:40.494 "transport_ack_timeout": 0, 00:18:40.494 "ctrlr_loss_timeout_sec": 0, 00:18:40.494 "reconnect_delay_sec": 0, 00:18:40.494 "fast_io_fail_timeout_sec": 0, 00:18:40.494 "disable_auto_failback": false, 00:18:40.494 "generate_uuids": false, 00:18:40.494 "transport_tos": 0, 00:18:40.494 "nvme_error_stat": false, 00:18:40.494 "rdma_srq_size": 0, 00:18:40.494 "io_path_stat": false, 00:18:40.494 "allow_accel_sequence": false, 00:18:40.494 "rdma_max_cq_size": 0, 00:18:40.494 "rdma_cm_event_timeout_ms": 0, 00:18:40.494 "dhchap_digests": [ 00:18:40.494 "sha256", 00:18:40.494 "sha384", 00:18:40.494 "sha512" 00:18:40.494 ], 00:18:40.494 "dhchap_dhgroups": [ 00:18:40.494 "null", 00:18:40.494 "ffdhe2048", 00:18:40.494 "ffdhe3072", 00:18:40.494 "ffdhe4096", 00:18:40.494 "ffdhe6144", 00:18:40.494 "ffdhe8192" 00:18:40.494 ] 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "bdev_nvme_attach_controller", 00:18:40.494 "params": { 00:18:40.494 "name": "nvme0", 00:18:40.494 "trtype": "TCP", 00:18:40.494 "adrfam": "IPv4", 00:18:40.494 "traddr": "10.0.0.2", 00:18:40.494 "trsvcid": "4420", 00:18:40.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.494 "prchk_reftag": false, 00:18:40.494 "prchk_guard": false, 00:18:40.494 "ctrlr_loss_timeout_sec": 0, 00:18:40.494 "reconnect_delay_sec": 0, 00:18:40.494 "fast_io_fail_timeout_sec": 0, 00:18:40.494 "psk": "key0", 00:18:40.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.494 "hdgst": false, 00:18:40.494 "ddgst": false, 00:18:40.494 "multipath": "multipath" 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 
"method": "bdev_nvme_set_hotplug", 00:18:40.494 "params": { 00:18:40.494 "period_us": 100000, 00:18:40.494 "enable": false 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "bdev_enable_histogram", 00:18:40.494 "params": { 00:18:40.494 "name": "nvme0n1", 00:18:40.494 "enable": true 00:18:40.494 } 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "method": "bdev_wait_for_examine" 00:18:40.494 } 00:18:40.494 ] 00:18:40.494 }, 00:18:40.494 { 00:18:40.494 "subsystem": "nbd", 00:18:40.494 "config": [] 00:18:40.494 } 00:18:40.494 ] 00:18:40.494 }' 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.494 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.494 [2024-11-19 17:36:42.678159] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:18:40.495 [2024-11-19 17:36:42.678205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3480375 ] 00:18:40.753 [2024-11-19 17:36:42.754953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.753 [2024-11-19 17:36:42.796597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.753 [2024-11-19 17:36:42.950106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.320 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.320 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.320 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:41.320 17:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:41.578 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.578 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.837 Running I/O for 1 seconds... 00:18:42.774 5383.00 IOPS, 21.03 MiB/s 00:18:42.774 Latency(us) 00:18:42.774 [2024-11-19T16:36:44.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.774 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:42.774 Verification LBA range: start 0x0 length 0x2000 00:18:42.774 nvme0n1 : 1.01 5435.32 21.23 0.00 0.00 23372.53 5043.42 22111.28 00:18:42.774 [2024-11-19T16:36:44.997Z] =================================================================================================================== 00:18:42.774 [2024-11-19T16:36:44.997Z] Total : 5435.32 21.23 0.00 0.00 23372.53 5043.42 22111.28 00:18:42.774 { 00:18:42.774 "results": [ 00:18:42.774 { 00:18:42.774 "job": "nvme0n1", 00:18:42.774 "core_mask": "0x2", 00:18:42.774 "workload": "verify", 00:18:42.774 "status": "finished", 00:18:42.774 "verify_range": { 00:18:42.774 "start": 0, 00:18:42.774 "length": 8192 00:18:42.774 }, 00:18:42.774 "queue_depth": 128, 00:18:42.774 "io_size": 4096, 00:18:42.774 "runtime": 1.013923, 00:18:42.774 "iops": 5435.323984168423, 00:18:42.774 "mibps": 21.2317343131579, 00:18:42.774 "io_failed": 0, 00:18:42.774 "io_timeout": 0, 00:18:42.774 "avg_latency_us": 23372.53158505124, 00:18:42.774 "min_latency_us": 5043.422608695652, 00:18:42.774 "max_latency_us": 22111.27652173913 00:18:42.774 } 00:18:42.774 ], 00:18:42.774 "core_count": 1 00:18:42.774 } 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:42.774 17:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:42.774 nvmf_trace.0 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3480375 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3480375 ']' 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3480375 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3480375 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3480375' 00:18:43.033 killing process with pid 3480375 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3480375 00:18:43.033 Received shutdown signal, test time was about 1.000000 seconds 00:18:43.033 00:18:43.033 Latency(us) 00:18:43.033 [2024-11-19T16:36:45.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.033 [2024-11-19T16:36:45.256Z] =================================================================================================================== 00:18:43.033 [2024-11-19T16:36:45.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3480375 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.033 rmmod nvme_tcp 00:18:43.033 rmmod nvme_fabrics 00:18:43.033 rmmod nvme_keyring 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3480131 ']' 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3480131 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3480131 ']' 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3480131 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.033 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3480131 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3480131' 00:18:43.292 killing process with pid 3480131 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3480131 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3480131 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.292 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zQiaWWHtag /tmp/tmp.KD54sTOC7i /tmp/tmp.e8Dh19KBkV 00:18:45.827 00:18:45.827 real 1m20.441s 00:18:45.827 user 2m3.009s 00:18:45.827 sys 0m30.994s 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.827 ************************************ 00:18:45.827 END TEST nvmf_tls 00:18:45.827 ************************************ 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.827 ************************************ 00:18:45.827 START TEST nvmf_fips 00:18:45.827 ************************************ 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.827 * Looking for test storage... 00:18:45.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.827 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.827 
17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:45.828 17:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.828 --rc genhtml_branch_coverage=1 00:18:45.828 --rc genhtml_function_coverage=1 00:18:45.828 --rc genhtml_legend=1 00:18:45.828 --rc geninfo_all_blocks=1 00:18:45.828 --rc geninfo_unexecuted_blocks=1 00:18:45.828 00:18:45.828 ' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.828 --rc genhtml_branch_coverage=1 00:18:45.828 --rc genhtml_function_coverage=1 00:18:45.828 --rc genhtml_legend=1 00:18:45.828 --rc geninfo_all_blocks=1 00:18:45.828 --rc geninfo_unexecuted_blocks=1 00:18:45.828 00:18:45.828 ' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.828 --rc genhtml_branch_coverage=1 00:18:45.828 --rc genhtml_function_coverage=1 00:18:45.828 --rc genhtml_legend=1 00:18:45.828 --rc geninfo_all_blocks=1 00:18:45.828 --rc geninfo_unexecuted_blocks=1 00:18:45.828 00:18:45.828 ' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.828 --rc genhtml_branch_coverage=1 00:18:45.828 --rc genhtml_function_coverage=1 00:18:45.828 --rc genhtml_legend=1 00:18:45.828 --rc geninfo_all_blocks=1 00:18:45.828 --rc geninfo_unexecuted_blocks=1 00:18:45.828 00:18:45.828 ' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.828 17:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.828 17:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.828 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:45.829 Error setting digest 00:18:45.829 4042B788C67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:45.829 4042B788C67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.829 17:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.829 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:45.830 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:45.830 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.830 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:52.400 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.400 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:52.401 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:52.401 Found net devices under 0000:86:00.0: cvl_0_0 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:52.401 Found net devices under 0000:86:00.1: cvl_0_1 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.401 17:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:52.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:18:52.401 00:18:52.401 --- 10.0.0.2 ping statistics --- 00:18:52.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.401 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:18:52.401 00:18:52.401 --- 10.0.0.1 ping statistics --- 00:18:52.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.401 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.401 17:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3484365 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3484365 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3484365 ']' 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.401 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:52.401 [2024-11-19 17:36:54.028173] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:18:52.401 [2024-11-19 17:36:54.028220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.401 [2024-11-19 17:36:54.107895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.401 [2024-11-19 17:36:54.146273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.401 [2024-11-19 17:36:54.146310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.401 [2024-11-19 17:36:54.146317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.401 [2024-11-19 17:36:54.146323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.401 [2024-11-19 17:36:54.146328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.401 [2024-11-19 17:36:54.146891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.660 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.660 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:52.660 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.660 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.660 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.47O 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.47O 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.47O 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.47O 00:18:52.919 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.919 [2024-11-19 17:36:55.064120] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.919 [2024-11-19 17:36:55.080131] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.919 [2024-11-19 17:36:55.080334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.919 malloc0 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3484491 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3484491 /var/tmp/bdevperf.sock 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3484491 ']' 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.178 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:53.178 [2024-11-19 17:36:55.211649] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:18:53.178 [2024-11-19 17:36:55.211703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484491 ] 00:18:53.178 [2024-11-19 17:36:55.285769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.178 [2024-11-19 17:36:55.329017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.113 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.113 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:54.113 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.47O 00:18:54.113 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.373 [2024-11-19 17:36:56.410101] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.373 TLSTESTn1 00:18:54.373 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.631 Running I/O for 10 seconds... 
00:18:56.501 5470.00 IOPS, 21.37 MiB/s [2024-11-19T16:36:59.677Z] 5504.50 IOPS, 21.50 MiB/s [2024-11-19T16:37:01.052Z] 5536.00 IOPS, 21.62 MiB/s [2024-11-19T16:37:01.619Z] 5506.50 IOPS, 21.51 MiB/s [2024-11-19T16:37:02.996Z] 5512.00 IOPS, 21.53 MiB/s [2024-11-19T16:37:03.932Z] 5518.67 IOPS, 21.56 MiB/s [2024-11-19T16:37:04.867Z] 5471.71 IOPS, 21.37 MiB/s [2024-11-19T16:37:05.803Z] 5477.25 IOPS, 21.40 MiB/s [2024-11-19T16:37:06.739Z] 5482.33 IOPS, 21.42 MiB/s [2024-11-19T16:37:06.739Z] 5477.10 IOPS, 21.39 MiB/s 00:19:04.516 Latency(us) 00:19:04.516 [2024-11-19T16:37:06.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.516 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:04.516 Verification LBA range: start 0x0 length 0x2000 00:19:04.516 TLSTESTn1 : 10.01 5482.37 21.42 0.00 0.00 23313.23 5185.89 23137.06 00:19:04.516 [2024-11-19T16:37:06.739Z] =================================================================================================================== 00:19:04.516 [2024-11-19T16:37:06.739Z] Total : 5482.37 21.42 0.00 0.00 23313.23 5185.89 23137.06 00:19:04.516 { 00:19:04.516 "results": [ 00:19:04.516 { 00:19:04.516 "job": "TLSTESTn1", 00:19:04.516 "core_mask": "0x4", 00:19:04.516 "workload": "verify", 00:19:04.516 "status": "finished", 00:19:04.516 "verify_range": { 00:19:04.516 "start": 0, 00:19:04.516 "length": 8192 00:19:04.516 }, 00:19:04.516 "queue_depth": 128, 00:19:04.516 "io_size": 4096, 00:19:04.516 "runtime": 10.013733, 00:19:04.516 "iops": 5482.371059823544, 00:19:04.516 "mibps": 21.41551195243572, 00:19:04.516 "io_failed": 0, 00:19:04.516 "io_timeout": 0, 00:19:04.516 "avg_latency_us": 23313.23446829237, 00:19:04.516 "min_latency_us": 5185.892173913044, 00:19:04.516 "max_latency_us": 23137.057391304348 00:19:04.516 } 00:19:04.516 ], 00:19:04.516 "core_count": 1 00:19:04.516 } 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:04.516 
17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:04.516 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:04.516 nvmf_trace.0 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3484491 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3484491 ']' 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3484491 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3484491 00:19:04.776 17:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3484491' 00:19:04.776 killing process with pid 3484491 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3484491 00:19:04.776 Received shutdown signal, test time was about 10.000000 seconds 00:19:04.776 00:19:04.776 Latency(us) 00:19:04.776 [2024-11-19T16:37:06.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.776 [2024-11-19T16:37:06.999Z] =================================================================================================================== 00:19:04.776 [2024-11-19T16:37:06.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3484491 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.776 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:04.776 rmmod nvme_tcp 00:19:04.776 rmmod nvme_fabrics 00:19:05.035 rmmod nvme_keyring 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3484365 ']' 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3484365 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3484365 ']' 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3484365 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3484365 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:05.035 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3484365' 00:19:05.035 killing process with pid 3484365 00:19:05.036 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3484365 00:19:05.036 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3484365 00:19:05.036 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:05.036 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:05.036 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:05.036 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:05.036 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:05.295 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:05.295 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:05.295 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:05.295 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:05.295 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.295 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.295 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.215 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:07.215 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.47O 00:19:07.216 00:19:07.216 real 0m21.745s 00:19:07.216 user 0m23.616s 00:19:07.216 sys 0m9.577s 00:19:07.216 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.216 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.216 ************************************ 00:19:07.216 END TEST nvmf_fips 00:19:07.216 ************************************ 00:19:07.216 17:37:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:07.216 17:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.216 17:37:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.216 17:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.216 ************************************ 00:19:07.216 START TEST nvmf_control_msg_list 00:19:07.216 ************************************ 00:19:07.216 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:07.480 * Looking for test storage... 00:19:07.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.480 17:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.480 --rc genhtml_branch_coverage=1 00:19:07.480 --rc genhtml_function_coverage=1 00:19:07.480 --rc genhtml_legend=1 00:19:07.480 --rc geninfo_all_blocks=1 00:19:07.480 --rc geninfo_unexecuted_blocks=1 00:19:07.480 00:19:07.480 ' 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.480 --rc genhtml_branch_coverage=1 00:19:07.480 --rc genhtml_function_coverage=1 00:19:07.480 --rc genhtml_legend=1 00:19:07.480 --rc geninfo_all_blocks=1 00:19:07.480 --rc geninfo_unexecuted_blocks=1 00:19:07.480 00:19:07.480 ' 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.480 --rc genhtml_branch_coverage=1 00:19:07.480 --rc genhtml_function_coverage=1 00:19:07.480 --rc genhtml_legend=1 00:19:07.480 --rc geninfo_all_blocks=1 00:19:07.480 --rc geninfo_unexecuted_blocks=1 00:19:07.480 00:19:07.480 ' 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.480 --rc genhtml_branch_coverage=1 00:19:07.480 --rc genhtml_function_coverage=1 00:19:07.480 --rc genhtml_legend=1 00:19:07.480 --rc geninfo_all_blocks=1 00:19:07.480 --rc geninfo_unexecuted_blocks=1 00:19:07.480 00:19:07.480 ' 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.480 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.481 17:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.481 17:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:14.055 17:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:14.055 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:14.055 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:14.055 17:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:14.055 Found net devices under 0000:86:00.0: cvl_0_0 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.055 17:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:14.055 Found net devices under 0000:86:00.1: cvl_0_1 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:14.055 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.056 17:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:14.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:19:14.056 00:19:14.056 --- 10.0.0.2 ping statistics --- 00:19:14.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.056 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:19:14.056 00:19:14.056 --- 10.0.0.1 ping statistics --- 00:19:14.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.056 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3490035 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3490035 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3490035 ']' 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 [2024-11-19 17:37:15.602235] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:19:14.056 [2024-11-19 17:37:15.602282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.056 [2024-11-19 17:37:15.682652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.056 [2024-11-19 17:37:15.723366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.056 [2024-11-19 17:37:15.723404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.056 [2024-11-19 17:37:15.723412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.056 [2024-11-19 17:37:15.723418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.056 [2024-11-19 17:37:15.723424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.056 [2024-11-19 17:37:15.724006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 [2024-11-19 17:37:15.859267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 Malloc0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 [2024-11-19 17:37:15.895553] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.057 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3490059 00:19:14.057 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.057 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3490060 00:19:14.057 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.057 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3490061 00:19:14.057 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.057 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3490059 00:19:14.057 [2024-11-19 17:37:15.973994] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:14.057 [2024-11-19 17:37:15.984211] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:14.057 [2024-11-19 17:37:15.984520] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:14.994 Initializing NVMe Controllers 00:19:14.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:14.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:14.994 Initialization complete. Launching workers. 00:19:14.994 ======================================================== 00:19:14.994 Latency(us) 00:19:14.994 Device Information : IOPS MiB/s Average min max 00:19:14.994 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40909.29 40788.43 41211.62 00:19:14.994 ======================================================== 00:19:14.994 Total : 25.00 0.10 40909.29 40788.43 41211.62 00:19:14.994 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3490060 00:19:14.994 Initializing NVMe Controllers 00:19:14.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:14.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:14.994 Initialization complete. Launching workers. 
00:19:14.994 ======================================================== 00:19:14.994 Latency(us) 00:19:14.994 Device Information : IOPS MiB/s Average min max 00:19:14.994 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40906.42 40796.11 41089.62 00:19:14.994 ======================================================== 00:19:14.994 Total : 25.00 0.10 40906.42 40796.11 41089.62 00:19:14.994 00:19:14.994 Initializing NVMe Controllers 00:19:14.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:14.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:14.994 Initialization complete. Launching workers. 00:19:14.994 ======================================================== 00:19:14.994 Latency(us) 00:19:14.994 Device Information : IOPS MiB/s Average min max 00:19:14.994 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6959.99 27.19 143.33 132.80 406.16 00:19:14.994 ======================================================== 00:19:14.994 Total : 6959.99 27.19 143.33 132.80 406.16 00:19:14.994 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3490061 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:14.994 17:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.994 rmmod nvme_tcp 00:19:14.994 rmmod nvme_fabrics 00:19:14.994 rmmod nvme_keyring 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:14.994 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3490035 ']' 00:19:14.995 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3490035 00:19:14.995 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3490035 ']' 00:19:14.995 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3490035 00:19:14.995 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:14.995 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.995 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3490035 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3490035' 00:19:15.254 killing process with pid 3490035 00:19:15.254 
17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3490035 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3490035 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.254 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:17.857 00:19:17.857 real 0m10.072s 00:19:17.857 user 0m6.476s 00:19:17.857 sys 0m5.452s 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:17.857 ************************************ 00:19:17.857 END TEST nvmf_control_msg_list 00:19:17.857 ************************************ 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.857 ************************************ 00:19:17.857 START TEST nvmf_wait_for_buf 00:19:17.857 ************************************ 00:19:17.857 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:17.857 * Looking for test storage... 
00:19:17.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:17.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.858 --rc genhtml_branch_coverage=1 00:19:17.858 --rc genhtml_function_coverage=1 00:19:17.858 --rc genhtml_legend=1 00:19:17.858 --rc geninfo_all_blocks=1 00:19:17.858 --rc geninfo_unexecuted_blocks=1 00:19:17.858 00:19:17.858 ' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:17.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.858 --rc genhtml_branch_coverage=1 00:19:17.858 --rc genhtml_function_coverage=1 00:19:17.858 --rc genhtml_legend=1 00:19:17.858 --rc geninfo_all_blocks=1 00:19:17.858 --rc geninfo_unexecuted_blocks=1 00:19:17.858 00:19:17.858 ' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:17.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.858 --rc genhtml_branch_coverage=1 00:19:17.858 --rc genhtml_function_coverage=1 00:19:17.858 --rc genhtml_legend=1 00:19:17.858 --rc geninfo_all_blocks=1 00:19:17.858 --rc geninfo_unexecuted_blocks=1 00:19:17.858 00:19:17.858 ' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:17.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.858 --rc genhtml_branch_coverage=1 00:19:17.858 --rc genhtml_function_coverage=1 00:19:17.858 --rc genhtml_legend=1 00:19:17.858 --rc geninfo_all_blocks=1 00:19:17.858 --rc geninfo_unexecuted_blocks=1 00:19:17.858 00:19:17.858 ' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.858 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:17.859 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:24.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:24.436 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.436 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:24.437 Found net devices under 0000:86:00.0: cvl_0_0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.437 17:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:24.437 Found net devices under 0000:86:00.1: cvl_0_1 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.437 17:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.437 17:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:19:24.437 00:19:24.437 --- 10.0.0.2 ping statistics --- 00:19:24.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.437 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:19:24.437 00:19:24.437 --- 10.0.0.1 ping statistics --- 00:19:24.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.437 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3493819 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 3493819 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3493819 ']' 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.437 [2024-11-19 17:37:25.766068] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:19:24.437 [2024-11-19 17:37:25.766116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.437 [2024-11-19 17:37:25.847550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.437 [2024-11-19 17:37:25.888849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.437 [2024-11-19 17:37:25.888886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:24.437 [2024-11-19 17:37:25.888894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.437 [2024-11-19 17:37:25.888900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.437 [2024-11-19 17:37:25.888908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.437 [2024-11-19 17:37:25.889482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.437 
17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.437 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:24.438 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.438 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.438 Malloc0 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.438 [2024-11-19 17:37:26.063155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.438 [2024-11-19 17:37:26.091339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:24.438 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.438 [2024-11-19 17:37:26.173544] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:25.377 Initializing NVMe Controllers 00:19:25.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:25.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:25.377 Initialization complete. Launching workers. 00:19:25.377 ======================================================== 00:19:25.377 Latency(us) 00:19:25.377 Device Information : IOPS MiB/s Average min max 00:19:25.377 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 123.56 15.45 33506.26 30053.30 71069.39 00:19:25.377 ======================================================== 00:19:25.377 Total : 123.56 15.45 33506.26 30053.30 71069.39 00:19:25.377 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.377 17:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.377 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.637 rmmod nvme_tcp 00:19:25.637 rmmod nvme_fabrics 00:19:25.637 rmmod nvme_keyring 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3493819 ']' 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3493819 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3493819 ']' 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3493819 
00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3493819 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3493819' 00:19:25.637 killing process with pid 3493819 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3493819 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3493819 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.637 17:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.637 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:28.175 00:19:28.175 real 0m10.365s 00:19:28.175 user 0m3.906s 00:19:28.175 sys 0m4.915s 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:28.175 ************************************ 00:19:28.175 END TEST nvmf_wait_for_buf 00:19:28.175 ************************************ 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:28.175 17:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:33.453 
17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:33.453 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.453 17:37:35 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:33.453 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:33.453 Found net devices under 0000:86:00.0: cvl_0_0 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:33.453 Found net devices under 0000:86:00.1: cvl_0_1 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.453 ************************************ 00:19:33.453 START TEST nvmf_perf_adq 00:19:33.453 ************************************ 00:19:33.453 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:33.714 * Looking for test storage... 00:19:33.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:33.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.714 --rc genhtml_branch_coverage=1 00:19:33.714 --rc genhtml_function_coverage=1 00:19:33.714 --rc genhtml_legend=1 00:19:33.714 --rc geninfo_all_blocks=1 00:19:33.714 --rc geninfo_unexecuted_blocks=1 00:19:33.714 00:19:33.714 ' 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:33.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.714 --rc genhtml_branch_coverage=1 00:19:33.714 --rc genhtml_function_coverage=1 00:19:33.714 --rc genhtml_legend=1 00:19:33.714 --rc geninfo_all_blocks=1 00:19:33.714 --rc geninfo_unexecuted_blocks=1 00:19:33.714 00:19:33.714 ' 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:33.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.714 --rc genhtml_branch_coverage=1 00:19:33.714 --rc genhtml_function_coverage=1 00:19:33.714 --rc genhtml_legend=1 00:19:33.714 --rc geninfo_all_blocks=1 00:19:33.714 --rc geninfo_unexecuted_blocks=1 00:19:33.714 00:19:33.714 ' 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:33.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.714 --rc genhtml_branch_coverage=1 00:19:33.714 --rc genhtml_function_coverage=1 00:19:33.714 --rc genhtml_legend=1 00:19:33.714 --rc geninfo_all_blocks=1 00:19:33.714 --rc geninfo_unexecuted_blocks=1 00:19:33.714 00:19:33.714 ' 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.714 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.715 17:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.715 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.289 17:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.289 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.290 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.290 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.290 17:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.290 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
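The `adq_reload_driver` step that the trace enters here (load `sch_mqprio`, remove and re-insert the `ice` driver, then sleep while the links settle) can be sketched as a standalone helper. This is an illustration only: the real commands need root and an Intel E810 NIC, so the sketch wraps them in a hypothetical `run` function that, under the default `DRY_RUN=1`, just prints each command instead of executing it.

```shell
#!/usr/bin/env bash
# Sketch of the adq_reload_driver sequence from target/perf_adq.sh.
# DRY_RUN=1 (the default here) only echoes the commands; set DRY_RUN=0
# on a machine with root and an ice-driven NIC to actually run them.
set -euo pipefail

run() {
    if [[ ${DRY_RUN:-1} -eq 1 ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run modprobe -a sch_mqprio   # mqprio qdisc needed for ADQ traffic classes
run rmmod ice                # unload the driver so ADQ state starts clean
run modprobe ice             # reload; the ports come back asynchronously
run sleep 5                  # settle time, mirroring the script's sleep 5
```

The `sleep 5` matches the `perf_adq.sh@63` step in the trace; without it the interfaces may not yet be up when the test continues.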
00:19:40.290 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:40.859 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:42.763 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.041 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.041 17:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.041 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.041 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.042 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.042 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.042 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:48.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:48.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:19:48.042 00:19:48.042 --- 10.0.0.2 ping statistics --- 00:19:48.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.042 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:19:48.042 00:19:48.042 --- 10.0.0.1 ping statistics --- 00:19:48.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.042 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3502159 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3502159 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3502159 ']' 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.042 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.042 [2024-11-19 17:37:50.116533] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:19:48.042 [2024-11-19 17:37:50.116577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.042 [2024-11-19 17:37:50.197212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.042 [2024-11-19 17:37:50.242110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.042 [2024-11-19 17:37:50.242147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.042 [2024-11-19 17:37:50.242155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.042 [2024-11-19 17:37:50.242161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.042 [2024-11-19 17:37:50.242166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
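The `nvmf_tcp_init` plumbing recorded above isolates the target port in its own network namespace: flush both interfaces, move `cvl_0_0` into `cvl_0_0_ns_spdk`, address the two sides as 10.0.0.1/10.0.0.2, open TCP port 4420 in iptables, and ping both directions. A hedged sketch of that sequence follows; the interface names and addresses are taken from the log, and the `run` dry-run wrapper is an addition so the sketch is safe to execute without root.

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init from test/nvmf/common.sh (namespace variant).
# DRY_RUN=1 (default) only prints the commands; DRY_RUN=0 needs root.
set -euo pipefail

NS=cvl_0_0_ns_spdk   # target-side namespace, as in the log
TGT_IF=cvl_0_0       # target interface  -> 10.0.0.2
INI_IF=cvl_0_1       # initiator interface -> 10.0.0.1

run() {
    if [[ ${DRY_RUN:-1} -eq 1 ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"           # target NIC moves into NS
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic on the initiator side (port 4420 = NVMF_PORT).
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```

Because the two physical ports of the same NIC are cabled back-to-back, putting one into a namespace makes loopback-free initiator/target traffic possible on a single host, which is why the later `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk`.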
00:19:48.042 [2024-11-19 17:37:50.243788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.042 [2024-11-19 17:37:50.243895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.042 [2024-11-19 17:37:50.244003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.042 [2024-11-19 17:37:50.244003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:48.301 17:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 [2024-11-19 17:37:50.454110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 Malloc1 00:19:48.301 17:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 [2024-11-19 17:37:50.510529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3502188 00:19:48.301 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:48.301 17:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:50.832 "tick_rate": 2300000000, 00:19:50.832 "poll_groups": [ 00:19:50.832 { 00:19:50.832 "name": "nvmf_tgt_poll_group_000", 00:19:50.832 "admin_qpairs": 1, 00:19:50.832 "io_qpairs": 1, 00:19:50.832 "current_admin_qpairs": 1, 00:19:50.832 "current_io_qpairs": 1, 00:19:50.832 "pending_bdev_io": 0, 00:19:50.832 "completed_nvme_io": 19611, 00:19:50.832 "transports": [ 00:19:50.832 { 00:19:50.832 "trtype": "TCP" 00:19:50.832 } 00:19:50.832 ] 00:19:50.832 }, 00:19:50.832 { 00:19:50.832 "name": "nvmf_tgt_poll_group_001", 00:19:50.832 "admin_qpairs": 0, 00:19:50.832 "io_qpairs": 1, 00:19:50.832 "current_admin_qpairs": 0, 00:19:50.832 "current_io_qpairs": 1, 00:19:50.832 "pending_bdev_io": 0, 00:19:50.832 "completed_nvme_io": 19861, 00:19:50.832 "transports": [ 00:19:50.832 { 00:19:50.832 "trtype": "TCP" 00:19:50.832 } 00:19:50.832 ] 00:19:50.832 }, 00:19:50.832 { 00:19:50.832 "name": "nvmf_tgt_poll_group_002", 00:19:50.832 "admin_qpairs": 0, 00:19:50.832 "io_qpairs": 1, 00:19:50.832 "current_admin_qpairs": 0, 00:19:50.832 "current_io_qpairs": 1, 00:19:50.832 "pending_bdev_io": 0, 00:19:50.832 "completed_nvme_io": 19880, 00:19:50.832 
"transports": [ 00:19:50.832 { 00:19:50.832 "trtype": "TCP" 00:19:50.832 } 00:19:50.832 ] 00:19:50.832 }, 00:19:50.832 { 00:19:50.832 "name": "nvmf_tgt_poll_group_003", 00:19:50.832 "admin_qpairs": 0, 00:19:50.832 "io_qpairs": 1, 00:19:50.832 "current_admin_qpairs": 0, 00:19:50.832 "current_io_qpairs": 1, 00:19:50.832 "pending_bdev_io": 0, 00:19:50.832 "completed_nvme_io": 19636, 00:19:50.832 "transports": [ 00:19:50.832 { 00:19:50.832 "trtype": "TCP" 00:19:50.832 } 00:19:50.832 ] 00:19:50.832 } 00:19:50.832 ] 00:19:50.832 }' 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:50.832 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3502188 00:19:58.945 Initializing NVMe Controllers 00:19:58.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:58.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:58.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:58.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:58.945 Initialization complete. Launching workers. 
00:19:58.945 ======================================================== 00:19:58.945 Latency(us) 00:19:58.945 Device Information : IOPS MiB/s Average min max 00:19:58.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10435.30 40.76 6132.30 2502.36 10417.63 00:19:58.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10619.50 41.48 6026.88 1933.51 10651.04 00:19:58.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10630.70 41.53 6019.25 2113.07 10436.04 00:19:58.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10497.90 41.01 6096.85 2068.33 10804.47 00:19:58.945 ======================================================== 00:19:58.945 Total : 42183.39 164.78 6068.45 1933.51 10804.47 00:19:58.945 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:58.945 rmmod nvme_tcp 00:19:58.945 rmmod nvme_fabrics 00:19:58.945 rmmod nvme_keyring 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:58.945 17:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3502159 ']' 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3502159 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3502159 ']' 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3502159 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502159 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502159' 00:19:58.945 killing process with pid 3502159 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3502159 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3502159 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:58.945 
17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.945 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.851 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:00.851 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:00.851 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:00.851 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:02.229 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:04.135 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.412 17:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:09.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:09.412 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:09.412 Found net devices under 0000:86:00.0: cvl_0_0 00:20:09.412 17:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.412 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:09.413 Found net devices under 0000:86:00.1: cvl_0_1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:20:09.413 00:20:09.413 --- 10.0.0.2 ping statistics --- 00:20:09.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.413 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:09.413 00:20:09.413 --- 10.0.0.1 ping statistics --- 00:20:09.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.413 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:09.413 net.core.busy_poll = 1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:09.413 net.core.busy_read = 1 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:09.413 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3505968 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3505968 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3505968 ']' 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.673 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.673 [2024-11-19 17:38:11.825167] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:20:09.673 [2024-11-19 17:38:11.825209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.932 [2024-11-19 17:38:11.905338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.932 [2024-11-19 17:38:11.945394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.932 [2024-11-19 17:38:11.945434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.932 [2024-11-19 17:38:11.945444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.932 [2024-11-19 17:38:11.945449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:09.932 [2024-11-19 17:38:11.945455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.932 [2024-11-19 17:38:11.946916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.932 [2024-11-19 17:38:11.947024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.932 [2024-11-19 17:38:11.947047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.932 [2024-11-19 17:38:11.947048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.932 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.932 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:09.932 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.932 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.932 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.932 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 [2024-11-19 17:38:12.153057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 17:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 Malloc1 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 [2024-11-19 17:38:12.216525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3505996 
00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:10.191 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:12.094 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:12.094 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.094 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.095 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.095 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:12.095 "tick_rate": 2300000000, 00:20:12.095 "poll_groups": [ 00:20:12.095 { 00:20:12.095 "name": "nvmf_tgt_poll_group_000", 00:20:12.095 "admin_qpairs": 1, 00:20:12.095 "io_qpairs": 4, 00:20:12.095 "current_admin_qpairs": 1, 00:20:12.095 "current_io_qpairs": 4, 00:20:12.095 "pending_bdev_io": 0, 00:20:12.095 "completed_nvme_io": 43009, 00:20:12.095 "transports": [ 00:20:12.095 { 00:20:12.095 "trtype": "TCP" 00:20:12.095 } 00:20:12.095 ] 00:20:12.095 }, 00:20:12.095 { 00:20:12.095 "name": "nvmf_tgt_poll_group_001", 00:20:12.095 "admin_qpairs": 0, 00:20:12.095 "io_qpairs": 0, 00:20:12.095 "current_admin_qpairs": 0, 00:20:12.095 "current_io_qpairs": 0, 00:20:12.095 "pending_bdev_io": 0, 00:20:12.095 "completed_nvme_io": 0, 00:20:12.095 "transports": [ 00:20:12.095 { 00:20:12.095 "trtype": "TCP" 00:20:12.095 } 00:20:12.095 ] 00:20:12.095 }, 00:20:12.095 { 00:20:12.095 "name": "nvmf_tgt_poll_group_002", 00:20:12.095 "admin_qpairs": 0, 00:20:12.095 "io_qpairs": 0, 00:20:12.095 "current_admin_qpairs": 0, 00:20:12.095 
"current_io_qpairs": 0, 00:20:12.095 "pending_bdev_io": 0, 00:20:12.095 "completed_nvme_io": 0, 00:20:12.095 "transports": [ 00:20:12.095 { 00:20:12.095 "trtype": "TCP" 00:20:12.095 } 00:20:12.095 ] 00:20:12.095 }, 00:20:12.095 { 00:20:12.095 "name": "nvmf_tgt_poll_group_003", 00:20:12.095 "admin_qpairs": 0, 00:20:12.095 "io_qpairs": 0, 00:20:12.095 "current_admin_qpairs": 0, 00:20:12.095 "current_io_qpairs": 0, 00:20:12.095 "pending_bdev_io": 0, 00:20:12.095 "completed_nvme_io": 0, 00:20:12.095 "transports": [ 00:20:12.095 { 00:20:12.095 "trtype": "TCP" 00:20:12.095 } 00:20:12.095 ] 00:20:12.095 } 00:20:12.095 ] 00:20:12.095 }' 00:20:12.095 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:12.095 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:12.095 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:20:12.095 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:20:12.095 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3505996 00:20:20.208 Initializing NVMe Controllers 00:20:20.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:20.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:20.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:20.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:20.208 Initialization complete. Launching workers. 
00:20:20.208 ======================================================== 00:20:20.208 Latency(us) 00:20:20.208 Device Information : IOPS MiB/s Average min max 00:20:20.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5793.90 22.63 11047.47 1087.69 56677.61 00:20:20.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5727.10 22.37 11176.14 1532.35 55894.40 00:20:20.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5471.90 21.37 11739.22 1458.17 55901.90 00:20:20.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5784.20 22.59 11066.82 1514.72 58223.92 00:20:20.208 ======================================================== 00:20:20.208 Total : 22777.09 88.97 11250.92 1087.69 58223.92 00:20:20.208 00:20:20.208 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:20.208 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.208 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:20.208 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.208 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:20.208 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.467 rmmod nvme_tcp 00:20:20.467 rmmod nvme_fabrics 00:20:20.467 rmmod nvme_keyring 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:20.467 17:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3505968 ']' 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3505968 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3505968 ']' 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3505968 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505968 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3505968' 00:20:20.467 killing process with pid 3505968 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3505968 00:20:20.467 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3505968 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:20.727 
17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.727 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:22.634 00:20:22.634 real 0m49.169s 00:20:22.634 user 2m44.161s 00:20:22.634 sys 0m10.116s 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.634 ************************************ 00:20:22.634 END TEST nvmf_perf_adq 00:20:22.634 ************************************ 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.634 17:38:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.895 ************************************ 00:20:22.895 START TEST nvmf_shutdown 00:20:22.895 ************************************ 00:20:22.895 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:22.895 * Looking for test storage... 00:20:22.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.895 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.895 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.895 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.895 17:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.895 --rc genhtml_branch_coverage=1 00:20:22.895 --rc genhtml_function_coverage=1 00:20:22.895 --rc genhtml_legend=1 00:20:22.895 --rc geninfo_all_blocks=1 00:20:22.895 --rc geninfo_unexecuted_blocks=1 00:20:22.895 00:20:22.895 ' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.895 --rc genhtml_branch_coverage=1 00:20:22.895 --rc genhtml_function_coverage=1 00:20:22.895 --rc genhtml_legend=1 00:20:22.895 --rc geninfo_all_blocks=1 00:20:22.895 --rc geninfo_unexecuted_blocks=1 00:20:22.895 00:20:22.895 ' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.895 --rc genhtml_branch_coverage=1 00:20:22.895 --rc genhtml_function_coverage=1 00:20:22.895 --rc genhtml_legend=1 00:20:22.895 --rc geninfo_all_blocks=1 00:20:22.895 --rc geninfo_unexecuted_blocks=1 00:20:22.895 00:20:22.895 ' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.895 --rc genhtml_branch_coverage=1 00:20:22.895 --rc genhtml_function_coverage=1 00:20:22.895 --rc genhtml_legend=1 00:20:22.895 --rc geninfo_all_blocks=1 00:20:22.895 --rc geninfo_unexecuted_blocks=1 00:20:22.895 00:20:22.895 ' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:22.895 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:22.896 ************************************ 00:20:22.896 START TEST nvmf_shutdown_tc1 00:20:22.896 ************************************ 00:20:22.896 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.156 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:29.728 17:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.728 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.729 17:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.729 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.729 17:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.729 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.729 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:29.729 17:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:29.729 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:29.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:20:29.729 00:20:29.729 --- 10.0.0.2 ping statistics --- 00:20:29.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.729 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:20:29.729 00:20:29.729 --- 10.0.0.1 ping statistics --- 00:20:29.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.729 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.729 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3511337 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3511337 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3511337 ']' 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:29.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.730 [2024-11-19 17:38:31.187158] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:20:29.730 [2024-11-19 17:38:31.187201] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.730 [2024-11-19 17:38:31.268610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.730 [2024-11-19 17:38:31.308961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.730 [2024-11-19 17:38:31.309002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.730 [2024-11-19 17:38:31.309009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.730 [2024-11-19 17:38:31.309015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.730 [2024-11-19 17:38:31.309021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:29.730 [2024-11-19 17:38:31.310635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.730 [2024-11-19 17:38:31.310744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.730 [2024-11-19 17:38:31.310853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.730 [2024-11-19 17:38:31.310854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.730 [2024-11-19 17:38:31.456052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.730 17:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.730 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.730 Malloc1 00:20:29.730 [2024-11-19 17:38:31.559731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.730 Malloc2 00:20:29.730 Malloc3 00:20:29.730 Malloc4 00:20:29.730 Malloc5 00:20:29.730 Malloc6 00:20:29.730 Malloc7 00:20:29.730 Malloc8 00:20:29.730 Malloc9 
00:20:29.730 Malloc10 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3511482 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3511482 /var/tmp/bdevperf.sock 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3511482 ']' 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.990 { 00:20:29.990 "params": { 00:20:29.990 "name": "Nvme$subsystem", 00:20:29.990 "trtype": "$TEST_TRANSPORT", 00:20:29.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.990 "adrfam": "ipv4", 00:20:29.990 "trsvcid": "$NVMF_PORT", 00:20:29.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.990 "hdgst": ${hdgst:-false}, 00:20:29.990 "ddgst": ${ddgst:-false} 00:20:29.990 }, 00:20:29.990 "method": "bdev_nvme_attach_controller" 00:20:29.990 } 00:20:29.990 EOF 00:20:29.990 )") 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.990 { 00:20:29.990 "params": { 00:20:29.990 "name": "Nvme$subsystem", 00:20:29.990 "trtype": "$TEST_TRANSPORT", 00:20:29.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.990 "adrfam": "ipv4", 00:20:29.990 "trsvcid": "$NVMF_PORT", 00:20:29.990 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.990 "hdgst": ${hdgst:-false}, 00:20:29.990 "ddgst": ${ddgst:-false} 00:20:29.990 }, 00:20:29.990 "method": "bdev_nvme_attach_controller" 00:20:29.990 } 00:20:29.990 EOF 00:20:29.990 )") 00:20:29.990 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.990 { 00:20:29.990 "params": { 00:20:29.990 "name": "Nvme$subsystem", 00:20:29.990 "trtype": "$TEST_TRANSPORT", 00:20:29.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.990 "adrfam": "ipv4", 00:20:29.990 "trsvcid": "$NVMF_PORT", 00:20:29.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.990 "hdgst": ${hdgst:-false}, 00:20:29.990 "ddgst": ${ddgst:-false} 00:20:29.990 }, 00:20:29.990 "method": "bdev_nvme_attach_controller" 00:20:29.990 } 00:20:29.990 EOF 00:20:29.990 )") 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.990 { 00:20:29.990 "params": { 00:20:29.990 "name": "Nvme$subsystem", 00:20:29.990 "trtype": "$TEST_TRANSPORT", 00:20:29.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.990 "adrfam": "ipv4", 00:20:29.990 "trsvcid": "$NVMF_PORT", 00:20:29.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.990 "hdgst": 
${hdgst:-false}, 00:20:29.990 "ddgst": ${ddgst:-false} 00:20:29.990 }, 00:20:29.990 "method": "bdev_nvme_attach_controller" 00:20:29.990 } 00:20:29.990 EOF 00:20:29.990 )") 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.990 { 00:20:29.990 "params": { 00:20:29.990 "name": "Nvme$subsystem", 00:20:29.990 "trtype": "$TEST_TRANSPORT", 00:20:29.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.990 "adrfam": "ipv4", 00:20:29.990 "trsvcid": "$NVMF_PORT", 00:20:29.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.990 "hdgst": ${hdgst:-false}, 00:20:29.990 "ddgst": ${ddgst:-false} 00:20:29.990 }, 00:20:29.990 "method": "bdev_nvme_attach_controller" 00:20:29.990 } 00:20:29.990 EOF 00:20:29.990 )") 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.990 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.990 { 00:20:29.990 "params": { 00:20:29.990 "name": "Nvme$subsystem", 00:20:29.990 "trtype": "$TEST_TRANSPORT", 00:20:29.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.990 "adrfam": "ipv4", 00:20:29.990 "trsvcid": "$NVMF_PORT", 00:20:29.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.991 "hdgst": ${hdgst:-false}, 00:20:29.991 "ddgst": ${ddgst:-false} 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 
00:20:29.991 } 00:20:29.991 EOF 00:20:29.991 )") 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.991 { 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme$subsystem", 00:20:29.991 "trtype": "$TEST_TRANSPORT", 00:20:29.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "$NVMF_PORT", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.991 "hdgst": ${hdgst:-false}, 00:20:29.991 "ddgst": ${ddgst:-false} 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 } 00:20:29.991 EOF 00:20:29.991 )") 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.991 [2024-11-19 17:38:32.033567] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:20:29.991 [2024-11-19 17:38:32.033613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.991 { 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme$subsystem", 00:20:29.991 "trtype": "$TEST_TRANSPORT", 00:20:29.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "$NVMF_PORT", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.991 "hdgst": ${hdgst:-false}, 00:20:29.991 "ddgst": ${ddgst:-false} 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 } 00:20:29.991 EOF 00:20:29.991 )") 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.991 { 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme$subsystem", 00:20:29.991 "trtype": "$TEST_TRANSPORT", 00:20:29.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "$NVMF_PORT", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.991 "hdgst": ${hdgst:-false}, 00:20:29.991 "ddgst": ${ddgst:-false} 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 
00:20:29.991 } 00:20:29.991 EOF 00:20:29.991 )") 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.991 { 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme$subsystem", 00:20:29.991 "trtype": "$TEST_TRANSPORT", 00:20:29.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "$NVMF_PORT", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.991 "hdgst": ${hdgst:-false}, 00:20:29.991 "ddgst": ${ddgst:-false} 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 } 00:20:29.991 EOF 00:20:29.991 )") 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
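[Editor's note] The generation loop traced above — `for subsystem in "${@:-1}"` appending one heredoc fragment per subsystem, then joining with `IFS=,` before `jq .` — can be sketched as below. This is an editorial reconstruction from the trace, not the verbatim nvmf/common.sh source; the final `jq .` validation step is omitted, and the environment defaults shown as fallbacks are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the xtrace above.
# Reconstructed from the log, not copied from SPDK; the real helper
# pipes the joined result through `jq .` and takes TEST_TRANSPORT,
# NVMF_FIRST_TARGET_IP and NVMF_PORT from the test environment.
gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one bdev_nvme_attach_controller fragment per subsystem id
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # IFS=, makes "${config[*]}" join the fragments with commas,
    # producing the '},{'-separated stream printf emits in the trace
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

Called as `gen_nvmf_target_json_sketch 1 2 3 4 5 6 7 8 9 10`, this reproduces the ten-controller parameter stream that the trace shows being handed to bdev_svc/bdevperf via `--json /dev/fd/62`.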
00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:29.991 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme1", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme2", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme3", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme4", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 
00:20:29.991 "name": "Nvme5", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme6", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme7", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme8", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme9", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 },{ 00:20:29.991 "params": { 00:20:29.991 "name": "Nvme10", 00:20:29.991 "trtype": "tcp", 00:20:29.991 "traddr": "10.0.0.2", 00:20:29.991 "adrfam": "ipv4", 00:20:29.991 "trsvcid": "4420", 00:20:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:29.991 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:29.991 "hdgst": false, 00:20:29.991 "ddgst": false 00:20:29.991 }, 00:20:29.991 "method": "bdev_nvme_attach_controller" 00:20:29.991 }' 00:20:29.991 [2024-11-19 17:38:32.110850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.991 [2024-11-19 17:38:32.152464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3511482 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:31.928 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:32.938 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3511482 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3511337 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.938 { 00:20:32.938 "params": { 00:20:32.938 "name": "Nvme$subsystem", 00:20:32.938 "trtype": "$TEST_TRANSPORT", 00:20:32.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.938 "adrfam": "ipv4", 00:20:32.938 "trsvcid": "$NVMF_PORT", 00:20:32.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.938 "hdgst": ${hdgst:-false}, 00:20:32.938 "ddgst": ${ddgst:-false} 00:20:32.938 }, 00:20:32.938 "method": "bdev_nvme_attach_controller" 00:20:32.938 } 00:20:32.938 EOF 00:20:32.938 )") 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.938 17:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.938 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.938 { 00:20:32.938 "params": { 00:20:32.938 "name": "Nvme$subsystem", 00:20:32.938 "trtype": "$TEST_TRANSPORT", 00:20:32.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.938 "adrfam": "ipv4", 00:20:32.938 "trsvcid": "$NVMF_PORT", 00:20:32.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 
17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 [2024-11-19 17:38:34.974746] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:20:32.939 [2024-11-19 17:38:34.974795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511980 ] 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": 
"bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.939 { 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme$subsystem", 00:20:32.939 "trtype": "$TEST_TRANSPORT", 00:20:32.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "$NVMF_PORT", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.939 "hdgst": ${hdgst:-false}, 00:20:32.939 "ddgst": ${ddgst:-false} 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 } 00:20:32.939 EOF 00:20:32.939 )") 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:32.939 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme1", 00:20:32.939 "trtype": "tcp", 00:20:32.939 "traddr": "10.0.0.2", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "4420", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.939 "hdgst": false, 00:20:32.939 "ddgst": false 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 },{ 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme2", 00:20:32.939 "trtype": "tcp", 00:20:32.939 "traddr": "10.0.0.2", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "4420", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:32.939 "hdgst": false, 00:20:32.939 "ddgst": false 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 },{ 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme3", 00:20:32.939 "trtype": "tcp", 00:20:32.939 "traddr": "10.0.0.2", 00:20:32.939 "adrfam": "ipv4", 00:20:32.939 "trsvcid": "4420", 00:20:32.939 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:32.939 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:32.939 "hdgst": false, 00:20:32.939 "ddgst": false 00:20:32.939 }, 00:20:32.939 "method": "bdev_nvme_attach_controller" 00:20:32.939 },{ 00:20:32.939 "params": { 00:20:32.939 "name": "Nvme4", 00:20:32.939 "trtype": "tcp", 00:20:32.939 "traddr": "10.0.0.2", 00:20:32.940 "adrfam": "ipv4", 00:20:32.940 "trsvcid": "4420", 00:20:32.940 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:32.940 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:32.940 "hdgst": false, 00:20:32.940 "ddgst": false 00:20:32.940 }, 00:20:32.940 "method": "bdev_nvme_attach_controller" 00:20:32.940 },{ 00:20:32.940 "params": { 
00:20:32.940 "name": "Nvme5", 00:20:32.940 "trtype": "tcp", 00:20:32.940 "traddr": "10.0.0.2", 00:20:32.940 "adrfam": "ipv4", 00:20:32.940 "trsvcid": "4420", 00:20:32.940 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:32.940 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:32.940 "hdgst": false, 00:20:32.940 "ddgst": false 00:20:32.940 }, 00:20:32.940 "method": "bdev_nvme_attach_controller" 00:20:32.940 },{ 00:20:32.940 "params": { 00:20:32.940 "name": "Nvme6", 00:20:32.940 "trtype": "tcp", 00:20:32.940 "traddr": "10.0.0.2", 00:20:32.940 "adrfam": "ipv4", 00:20:32.940 "trsvcid": "4420", 00:20:32.940 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:32.940 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:32.940 "hdgst": false, 00:20:32.940 "ddgst": false 00:20:32.940 }, 00:20:32.940 "method": "bdev_nvme_attach_controller" 00:20:32.940 },{ 00:20:32.940 "params": { 00:20:32.940 "name": "Nvme7", 00:20:32.940 "trtype": "tcp", 00:20:32.940 "traddr": "10.0.0.2", 00:20:32.940 "adrfam": "ipv4", 00:20:32.940 "trsvcid": "4420", 00:20:32.940 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:32.940 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:32.940 "hdgst": false, 00:20:32.940 "ddgst": false 00:20:32.940 }, 00:20:32.940 "method": "bdev_nvme_attach_controller" 00:20:32.940 },{ 00:20:32.940 "params": { 00:20:32.940 "name": "Nvme8", 00:20:32.940 "trtype": "tcp", 00:20:32.940 "traddr": "10.0.0.2", 00:20:32.940 "adrfam": "ipv4", 00:20:32.940 "trsvcid": "4420", 00:20:32.940 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:32.940 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:32.940 "hdgst": false, 00:20:32.940 "ddgst": false 00:20:32.940 }, 00:20:32.940 "method": "bdev_nvme_attach_controller" 00:20:32.940 },{ 00:20:32.940 "params": { 00:20:32.940 "name": "Nvme9", 00:20:32.940 "trtype": "tcp", 00:20:32.940 "traddr": "10.0.0.2", 00:20:32.940 "adrfam": "ipv4", 00:20:32.940 "trsvcid": "4420", 00:20:32.940 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:32.940 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:32.940 "hdgst": false,
00:20:32.940 "ddgst": false
00:20:32.940 },
00:20:32.940 "method": "bdev_nvme_attach_controller"
00:20:32.940 },{
00:20:32.940 "params": {
00:20:32.940 "name": "Nvme10",
00:20:32.940 "trtype": "tcp",
00:20:32.940 "traddr": "10.0.0.2",
00:20:32.940 "adrfam": "ipv4",
00:20:32.940 "trsvcid": "4420",
00:20:32.940 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:32.940 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:32.940 "hdgst": false,
00:20:32.940 "ddgst": false
00:20:32.940 },
00:20:32.940 "method": "bdev_nvme_attach_controller"
00:20:32.940 }'
00:20:32.940 [2024-11-19 17:38:35.051706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:32.940 [2024-11-19 17:38:35.093245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:34.313 Running I/O for 1 seconds...
00:20:35.247 2212.00 IOPS, 138.25 MiB/s
00:20:35.247 Latency(us)
00:20:35.247 [2024-11-19T16:38:37.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:35.247 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.247 Verification LBA range: start 0x0 length 0x400
00:20:35.247 Nvme1n1 : 1.14 289.35 18.08 0.00 0.00 214823.04 17210.32 203332.56
00:20:35.247 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.247 Verification LBA range: start 0x0 length 0x400
00:20:35.247 Nvme2n1 : 1.04 248.83 15.55 0.00 0.00 246622.49 20857.54 218833.25
00:20:35.247 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.247 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme3n1 : 1.13 282.70 17.67 0.00 0.00 217943.71 24390.79 217921.45
00:20:35.248 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.248 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme4n1 : 1.13 286.91 17.93 0.00 0.00 209430.87 10713.71 222480.47
00:20:35.248 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.248 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme5n1 : 1.09 235.63 14.73 0.00 0.00 253314.23 17438.27 226127.69
00:20:35.248 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.248 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme6n1 : 1.14 283.50 17.72 0.00 0.00 207892.80 2350.75 225215.89
00:20:35.248 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.248 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme7n1 : 1.15 277.98 17.37 0.00 0.00 209199.73 13164.19 227039.50
00:20:35.248 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.248 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme8n1 : 1.15 283.39 17.71 0.00 0.00 201701.67 1588.54 217921.45
00:20:35.248 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.248 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme9n1 : 1.16 276.65 17.29 0.00 0.00 203669.33 10656.72 228863.11
00:20:35.248 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.248 Verification LBA range: start 0x0 length 0x400
00:20:35.248 Nvme10n1 : 1.15 280.69 17.54 0.00 0.00 197817.85 1645.52 238892.97
00:20:35.248 [2024-11-19T16:38:37.471Z] ===================================================================================================================
00:20:35.248 [2024-11-19T16:38:37.471Z] Total : 2745.63 171.60 0.00 0.00 214818.69 1588.54 238892.97
00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
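[Editor's note] The `killprocess` helper exercised in the teardown below (`kill -0`, `uname`, `ps --no-headers -o comm=`, then `kill`/`wait`) can be sketched as follows. This is a simplified editorial reconstruction from the trace; the real autotest_common.sh version also special-cases a bare `sudo` wrapper process and has a forced-kill fallback, both omitted here.

```shell
#!/usr/bin/env bash
# Sketch of autotest_common.sh's killprocess as traced in this log.
# Reconstructed from the xtrace; sudo handling and kill -9 escalation
# of the real helper are intentionally left out.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    # kill -0 only probes that the pid exists; no signal is delivered
    kill -0 "$pid" 2>/dev/null || return 0
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # the real helper bails out here when process_name is "sudo"
        echo "killing process with pid $pid ($process_name)"
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}
```

In the log this is what reaps the nvmf target (pid 3511337) after `rmmod` of the nvme transport modules, with `wait` ensuring the reactor thread has actually exited before the test reports success.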
00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.506 rmmod nvme_tcp 00:20:35.506 rmmod nvme_fabrics 00:20:35.506 rmmod nvme_keyring 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3511337 ']' 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3511337 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3511337 ']' 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3511337 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3511337 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3511337' 00:20:35.506 killing process with pid 3511337 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3511337 00:20:35.506 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3511337 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.074 17:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.074 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:37.979 00:20:37.979 real 0m14.983s 00:20:37.979 user 0m32.546s 00:20:37.979 sys 0m5.740s 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.979 ************************************ 00:20:37.979 END TEST nvmf_shutdown_tc1 00:20:37.979 ************************************ 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:37.979 ************************************ 00:20:37.979 
START TEST nvmf_shutdown_tc2 00:20:37.979 ************************************ 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:37.979 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.980 17:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.980 17:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.980 17:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.980 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.980 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.980 17:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.980 17:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.980 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.980 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:38.240 Found net devices under 0000:86:00.1: cvl_0_1 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.240 17:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:38.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:20:38.240 00:20:38.240 --- 10.0.0.2 ping statistics --- 00:20:38.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.240 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:20:38.240 00:20:38.240 --- 10.0.0.1 ping statistics --- 00:20:38.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.240 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.240 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.499 17:38:40 
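The `nvmf_tcp_init` sequence traced above moves one port of the NIC into a private network namespace so target and initiator get separate TCP stacks on a single host, then verifies the path with `ping` in both directions. A sketch of that wiring, under the names from this log (interfaces `cvl_0_0`/`cvl_0_1`, namespace `cvl_0_0_ns_spdk`, addresses 10.0.0.1/10.0.0.2); the real commands need root and the physical ports, so the sketch skips itself otherwise:

```shell
# Hypothetical sketch of the netns setup performed by nvmf_tcp_init above.
# Interface/namespace names and addresses are taken from this log; the
# guard below is added so the sketch is harmless without the hardware.
if [ "$(id -u)" -ne 0 ] || [ ! -d /sys/class/net/cvl_0_0 ]; then
    netns_result="skipped: needs root and the cvl_0_0 port"
else
    ip netns add cvl_0_0_ns_spdk               # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                         # same reachability check as the log
    netns_result="ok"
fi
echo "$netns_result"
```

With this layout, `nvmf_tgt` is later launched under `ip netns exec cvl_0_0_ns_spdk` so it listens on 10.0.0.2 inside the namespace while bdevperf connects from the host side.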
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3513005 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3513005 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3513005 ']' 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.499 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 [2024-11-19 17:38:40.535881] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:20:38.499 [2024-11-19 17:38:40.535929] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.499 [2024-11-19 17:38:40.614692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.499 [2024-11-19 17:38:40.658383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.499 [2024-11-19 17:38:40.658420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.499 [2024-11-19 17:38:40.658428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.499 [2024-11-19 17:38:40.658434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.499 [2024-11-19 17:38:40.658440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.499 [2024-11-19 17:38:40.660079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.499 [2024-11-19 17:38:40.660184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.500 [2024-11-19 17:38:40.660290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.500 [2024-11-19 17:38:40.660291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.434 [2024-11-19 17:38:41.422696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.434 17:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.434 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.434 Malloc1 00:20:39.434 [2024-11-19 17:38:41.534744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.434 Malloc2 00:20:39.434 Malloc3 00:20:39.434 Malloc4 00:20:39.692 Malloc5 00:20:39.692 Malloc6 00:20:39.692 Malloc7 00:20:39.692 Malloc8 00:20:39.692 Malloc9 
00:20:39.692 Malloc10 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3513285 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3513285 /var/tmp/bdevperf.sock 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3513285 ']' 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:39.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.951 "adrfam": "ipv4", 00:20:39.951 "trsvcid": "$NVMF_PORT", 00:20:39.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.951 "hdgst": ${hdgst:-false}, 00:20:39.951 "ddgst": ${ddgst:-false} 00:20:39.951 }, 00:20:39.951 "method": "bdev_nvme_attach_controller" 00:20:39.951 } 00:20:39.951 EOF 00:20:39.951 )") 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.951 
"adrfam": "ipv4", 00:20:39.951 "trsvcid": "$NVMF_PORT", 00:20:39.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.951 "hdgst": ${hdgst:-false}, 00:20:39.951 "ddgst": ${ddgst:-false} 00:20:39.951 }, 00:20:39.951 "method": "bdev_nvme_attach_controller" 00:20:39.951 } 00:20:39.951 EOF 00:20:39.951 )") 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.951 "adrfam": "ipv4", 00:20:39.951 "trsvcid": "$NVMF_PORT", 00:20:39.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.951 "hdgst": ${hdgst:-false}, 00:20:39.951 "ddgst": ${ddgst:-false} 00:20:39.951 }, 00:20:39.951 "method": "bdev_nvme_attach_controller" 00:20:39.951 } 00:20:39.951 EOF 00:20:39.951 )") 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.951 "adrfam": "ipv4", 00:20:39.951 "trsvcid": "$NVMF_PORT", 00:20:39.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:39.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.951 "hdgst": ${hdgst:-false}, 00:20:39.951 "ddgst": ${ddgst:-false} 00:20:39.951 }, 00:20:39.951 "method": "bdev_nvme_attach_controller" 00:20:39.951 } 00:20:39.951 EOF 00:20:39.951 )") 00:20:39.951 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.951 "adrfam": "ipv4", 00:20:39.951 "trsvcid": "$NVMF_PORT", 00:20:39.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.951 "hdgst": ${hdgst:-false}, 00:20:39.951 "ddgst": ${ddgst:-false} 00:20:39.951 }, 00:20:39.951 "method": "bdev_nvme_attach_controller" 00:20:39.951 } 00:20:39.951 EOF 00:20:39.951 )") 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.951 "adrfam": "ipv4", 00:20:39.951 "trsvcid": "$NVMF_PORT", 00:20:39.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.951 "hdgst": ${hdgst:-false}, 00:20:39.951 "ddgst": 
${ddgst:-false} 00:20:39.951 }, 00:20:39.951 "method": "bdev_nvme_attach_controller" 00:20:39.951 } 00:20:39.951 EOF 00:20:39.951 )") 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.951 "adrfam": "ipv4", 00:20:39.951 "trsvcid": "$NVMF_PORT", 00:20:39.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.951 "hdgst": ${hdgst:-false}, 00:20:39.951 "ddgst": ${ddgst:-false} 00:20:39.951 }, 00:20:39.951 "method": "bdev_nvme_attach_controller" 00:20:39.951 } 00:20:39.951 EOF 00:20:39.951 )") 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.951 [2024-11-19 17:38:42.019372] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:20:39.951 [2024-11-19 17:38:42.019420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513285 ] 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.951 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.951 { 00:20:39.951 "params": { 00:20:39.951 "name": "Nvme$subsystem", 00:20:39.951 "trtype": "$TEST_TRANSPORT", 00:20:39.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "$NVMF_PORT", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.952 "hdgst": ${hdgst:-false}, 00:20:39.952 "ddgst": ${ddgst:-false} 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 } 00:20:39.952 EOF 00:20:39.952 )") 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.952 { 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme$subsystem", 00:20:39.952 "trtype": "$TEST_TRANSPORT", 00:20:39.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "$NVMF_PORT", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.952 "hdgst": ${hdgst:-false}, 00:20:39.952 "ddgst": ${ddgst:-false} 00:20:39.952 }, 00:20:39.952 "method": 
"bdev_nvme_attach_controller" 00:20:39.952 } 00:20:39.952 EOF 00:20:39.952 )") 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.952 { 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme$subsystem", 00:20:39.952 "trtype": "$TEST_TRANSPORT", 00:20:39.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "$NVMF_PORT", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.952 "hdgst": ${hdgst:-false}, 00:20:39.952 "ddgst": ${ddgst:-false} 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 } 00:20:39.952 EOF 00:20:39.952 )") 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:39.952 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme1", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme2", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme3", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme4", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 
00:20:39.952 "name": "Nvme5", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme6", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme7", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme8", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme9", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 },{ 00:20:39.952 "params": { 00:20:39.952 "name": "Nvme10", 00:20:39.952 "trtype": "tcp", 00:20:39.952 "traddr": "10.0.0.2", 00:20:39.952 "adrfam": "ipv4", 00:20:39.952 "trsvcid": "4420", 00:20:39.952 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:39.952 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:39.952 "hdgst": false, 00:20:39.952 "ddgst": false 00:20:39.952 }, 00:20:39.952 "method": "bdev_nvme_attach_controller" 00:20:39.952 }' 00:20:39.952 [2024-11-19 17:38:42.097601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.952 [2024-11-19 17:38:42.139005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.325 Running I/O for 10 seconds... 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:41.893 17:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:41.893 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:42.151 17:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3513285 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3513285 ']' 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3513285 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.151 17:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3513285 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3513285' 00:20:42.151 killing process with pid 3513285 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3513285 00:20:42.151 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3513285 00:20:42.410 Received shutdown signal, test time was about 0.958839 seconds 00:20:42.410 00:20:42.410 Latency(us) 00:20:42.410 [2024-11-19T16:38:44.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.410 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme1n1 : 0.95 268.95 16.81 0.00 0.00 234978.17 19261.89 217009.64 00:20:42.410 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme2n1 : 0.94 271.72 16.98 0.00 0.00 229077.26 17324.30 219745.06 00:20:42.410 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme3n1 : 0.96 335.04 20.94 0.00 0.00 181915.20 12024.43 223392.28 00:20:42.410 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme4n1 : 0.94 273.09 17.07 0.00 0.00 219902.22 
23592.96 210627.01 00:20:42.410 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme5n1 : 0.94 271.16 16.95 0.00 0.00 217554.14 15842.62 217921.45 00:20:42.410 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme6n1 : 0.95 269.20 16.82 0.00 0.00 215393.95 17666.23 219745.06 00:20:42.410 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme7n1 : 0.93 274.63 17.16 0.00 0.00 206675.92 18122.13 220656.86 00:20:42.410 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme8n1 : 0.93 281.53 17.60 0.00 0.00 197118.54 4530.53 217921.45 00:20:42.410 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme9n1 : 0.96 267.18 16.70 0.00 0.00 205328.92 17210.32 249834.63 00:20:42.410 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.410 Verification LBA range: start 0x0 length 0x400 00:20:42.410 Nvme10n1 : 0.91 209.87 13.12 0.00 0.00 254139.29 19489.84 235245.75 00:20:42.410 [2024-11-19T16:38:44.633Z] =================================================================================================================== 00:20:42.410 [2024-11-19T16:38:44.633Z] Total : 2722.36 170.15 0.00 0.00 214369.07 4530.53 249834.63 00:20:42.410 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3513005 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.785 rmmod nvme_tcp 00:20:43.785 rmmod nvme_fabrics 00:20:43.785 rmmod nvme_keyring 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3513005 ']' 
00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3513005 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3513005 ']' 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3513005 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3513005 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3513005' 00:20:43.785 killing process with pid 3513005 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3513005 00:20:43.785 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3513005 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.044 17:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.044 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.581 00:20:46.581 real 0m8.012s 00:20:46.581 user 0m24.366s 00:20:46.581 sys 0m1.399s 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 ************************************ 00:20:46.581 END TEST nvmf_shutdown_tc2 00:20:46.581 ************************************ 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:46.581 17:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 ************************************ 00:20:46.581 START TEST nvmf_shutdown_tc3 00:20:46.581 ************************************ 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.581 17:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:46.581 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.581 17:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:46.581 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:46.581 Found net devices under 0000:86:00.0: cvl_0_0 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.581 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:46.582 Found net devices under 0000:86:00.1: cvl_0_1 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.582 
17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.582 17:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:46.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:20:46.582 00:20:46.582 --- 10.0.0.2 ping statistics --- 00:20:46.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.582 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:46.582 00:20:46.582 --- 10.0.0.1 ping statistics --- 00:20:46.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.582 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3514546 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3514546 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3514546 ']' 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.582 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.582 [2024-11-19 17:38:48.606687] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:20:46.582 [2024-11-19 17:38:48.606732] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.582 [2024-11-19 17:38:48.682929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.582 [2024-11-19 17:38:48.722812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.582 [2024-11-19 17:38:48.722851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.582 [2024-11-19 17:38:48.722858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.582 [2024-11-19 17:38:48.722864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.582 [2024-11-19 17:38:48.722869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.582 [2024-11-19 17:38:48.724365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.582 [2024-11-19 17:38:48.724476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.582 [2024-11-19 17:38:48.724582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.582 [2024-11-19 17:38:48.724583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.841 [2024-11-19 17:38:48.873579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.841 17:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.841 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.841 Malloc1 00:20:46.841 [2024-11-19 17:38:48.985600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.841 Malloc2 00:20:46.841 Malloc3 00:20:47.100 Malloc4 00:20:47.100 Malloc5 00:20:47.100 Malloc6 00:20:47.100 Malloc7 00:20:47.100 Malloc8 00:20:47.100 Malloc9 
00:20:47.358 Malloc10 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3514611 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3514611 /var/tmp/bdevperf.sock 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3514611 ']' 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:47.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.358 { 00:20:47.358 "params": { 00:20:47.358 "name": "Nvme$subsystem", 00:20:47.358 "trtype": "$TEST_TRANSPORT", 00:20:47.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.358 "adrfam": "ipv4", 00:20:47.358 "trsvcid": "$NVMF_PORT", 00:20:47.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.358 "hdgst": ${hdgst:-false}, 00:20:47.358 "ddgst": ${ddgst:-false} 00:20:47.358 }, 00:20:47.358 "method": "bdev_nvme_attach_controller" 00:20:47.358 } 00:20:47.358 EOF 00:20:47.358 )") 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.358 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.358 { 00:20:47.358 "params": { 00:20:47.358 "name": "Nvme$subsystem", 00:20:47.358 "trtype": "$TEST_TRANSPORT", 00:20:47.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.358 
"adrfam": "ipv4", 00:20:47.358 "trsvcid": "$NVMF_PORT", 00:20:47.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.359 "hdgst": ${hdgst:-false}, 00:20:47.359 "ddgst": ${ddgst:-false} 00:20:47.359 }, 00:20:47.359 "method": "bdev_nvme_attach_controller" 00:20:47.359 } 00:20:47.359 EOF 00:20:47.359 )") 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.359 { 00:20:47.359 "params": { 00:20:47.359 "name": "Nvme$subsystem", 00:20:47.359 "trtype": "$TEST_TRANSPORT", 00:20:47.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.359 "adrfam": "ipv4", 00:20:47.359 "trsvcid": "$NVMF_PORT", 00:20:47.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.359 "hdgst": ${hdgst:-false}, 00:20:47.359 "ddgst": ${ddgst:-false} 00:20:47.359 }, 00:20:47.359 "method": "bdev_nvme_attach_controller" 00:20:47.359 } 00:20:47.359 EOF 00:20:47.359 )") 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.359 { 00:20:47.359 "params": { 00:20:47.359 "name": "Nvme$subsystem", 00:20:47.359 "trtype": "$TEST_TRANSPORT", 00:20:47.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.359 "adrfam": "ipv4", 00:20:47.359 "trsvcid": "$NVMF_PORT", 00:20:47.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:47.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.359 "hdgst": ${hdgst:-false}, 00:20:47.359 "ddgst": ${ddgst:-false} 00:20:47.359 }, 00:20:47.359 "method": "bdev_nvme_attach_controller" 00:20:47.359 } 00:20:47.359 EOF 00:20:47.359 )") 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.359 { 00:20:47.359 "params": { 00:20:47.359 "name": "Nvme$subsystem", 00:20:47.359 "trtype": "$TEST_TRANSPORT", 00:20:47.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.359 "adrfam": "ipv4", 00:20:47.359 "trsvcid": "$NVMF_PORT", 00:20:47.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.359 "hdgst": ${hdgst:-false}, 00:20:47.359 "ddgst": ${ddgst:-false} 00:20:47.359 }, 00:20:47.359 "method": "bdev_nvme_attach_controller" 00:20:47.359 } 00:20:47.359 EOF 00:20:47.359 )") 00:20:47.359 [2024-11-19 17:38:49.458573] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
00:20:47.359 [2024-11-19 17:38:49.458621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514611 ] 00:20:47.359 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:47.360 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
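The xtrace above shows nvmf/common.sh building one JSON "params" fragment per subsystem with a heredoc appended to a bash array, which is then joined with commas (the `IFS=,` / `printf '%s\n' "${config[*]}"` step whose merged output follows). A minimal, self-contained sketch of that pattern is below; the two-subsystem loop and the default values (tcp, 10.0.0.2, 4420) are placeholders standing in for the real test environment, not taken from nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config-building pattern from the log:
# one heredoc JSON fragment per subsystem, collected in an array.
# TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT defaults are illustrative.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in 1 2; do
  # hdgst/ddgst fall back to false when unset, as in the log.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the log's IFS=, / printf step does.
joined="$(IFS=,; printf '%s' "${config[*]}")"
printf '%s\n' "$joined"
```

The comma-joined result is what the test then pipes through `jq .` to produce the merged attach-controller parameters printed below.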
00:20:47.360 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:47.360 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme1", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme2", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme3", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme4", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 
00:20:47.360 "name": "Nvme5", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme6", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme7", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme8", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme9", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 },{ 00:20:47.360 "params": { 00:20:47.360 "name": "Nvme10", 00:20:47.360 "trtype": "tcp", 00:20:47.360 "traddr": "10.0.0.2", 00:20:47.360 "adrfam": "ipv4", 00:20:47.360 "trsvcid": "4420", 00:20:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:47.360 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:47.360 "hdgst": false, 00:20:47.360 "ddgst": false 00:20:47.360 }, 00:20:47.360 "method": "bdev_nvme_attach_controller" 00:20:47.360 }' 00:20:47.360 [2024-11-19 17:38:49.534089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.360 [2024-11-19 17:38:49.575966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.260 Running I/O for 10 seconds... 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:49.260 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:49.518 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:20:49.518 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:49.518 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.518 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.518 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.518 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.518 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.519 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:49.519 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:49.519 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3514546 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3514546 ']' 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3514546 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.777 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3514546 00:20:50.056 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.056 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.056 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3514546' killing process with pid 3514546 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3514546 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3514546 00:20:50.056 [2024-11-19 17:38:52.042123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f700 is same with the state(6) to be set 00:20:50.057 [2024-11-19 17:38:52.044439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237fbf0 is same with the state(6) to be set 00:20:50.057 [2024-11-19 17:38:52.046864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23805b0 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23805b0 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047662] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047746] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with [2024-11-19 17:38:52.047740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:1the state(6) to be set 00:20:50.058 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 [2024-11-19 17:38:52.047780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 17:38:52.047802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 the state(6) to be set 00:20:50.058 
[2024-11-19 17:38:52.047810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 [2024-11-19 17:38:52.047825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 [2024-11-19 17:38:52.047840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 [2024-11-19 17:38:52.047857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with [2024-11-19 17:38:52.047865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:1the state(6) to be set 00:20:50.058 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 [2024-11-19 17:38:52.047881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 [2024-11-19 17:38:52.047895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 [2024-11-19 17:38:52.047911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with [2024-11-19 17:38:52.047919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1the state(6) to be set 00:20:50.058 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 17:38:52.047931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.058 the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.058 [2024-11-19 17:38:52.047940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.058 [2024-11-19 17:38:52.047946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.047955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.047959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.047965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.047970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.047973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.047977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.047983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.047985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.047991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 17:38:52.047992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.059 [2024-11-19 17:38:52.048016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2380930 is same with the state(6) to be set 00:20:50.059 [2024-11-19 17:38:52.048062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.059 [2024-11-19 17:38:52.048152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.059 [2024-11-19 17:38:52.048439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.059 [2024-11-19 17:38:52.048447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 
[2024-11-19 17:38:52.048501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.060 [2024-11-19 17:38:52.048801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.060 [2024-11-19 17:38:52.048933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 17:38:52.048946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 
17:38:52.048968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 17:38:52.048985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.048992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 17:38:52.048999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.049006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc19b0 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 17:38:52.049045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.049052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 17:38:52.049059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.049066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 17:38:52.049073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.049072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.060 [2024-11-19 17:38:52.049086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.060 [2024-11-19 17:38:52.049093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb948c0 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.060 [2024-11-19 17:38:52.049131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769640 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049233] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 
[2024-11-19 17:38:52.049288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fa90 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049347] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.061 [2024-11-19 17:38:52.049387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.061 [2024-11-19 17:38:52.049395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the 
state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773d50 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.061 [2024-11-19 17:38:52.049423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.062 [2024-11-19 17:38:52.049434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.062 [2024-11-19 17:38:52.049450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:50.062 [2024-11-19 17:38:52.049457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.062 [2024-11-19 17:38:52.049464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.062 [2024-11-19 17:38:52.049471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.062 [2024-11-19 17:38:52.049479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.062 [2024-11-19 17:38:52.049485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.062 [2024-11-19 17:38:52.049496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7741b0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.049551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e00 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with 
the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 
00:20:50.062 [2024-11-19 17:38:52.050766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 
17:38:52.050846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050927] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.050998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.051006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.051012] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.062 [2024-11-19 17:38:52.051018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23812d0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051906] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051991] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.051997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052064] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052143] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052216] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052290] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23817c0 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:50.063 [2024-11-19 17:38:52.052485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc19b0 (9): Bad file descriptor 00:20:50.063 [2024-11-19 17:38:52.052877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.063 [2024-11-19 17:38:52.052918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.052995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with 
the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 
[2024-11-19 17:38:52.053309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 
[2024-11-19 17:38:52.053316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 
[2024-11-19 17:38:52.053321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.064 [2024-11-19 17:38:52.053325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7290 is same with the state(6) to be set 00:20:50.064 [2024-11-19 17:38:52.053329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.064 [2024-11-19 17:38:52.053338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 
[2024-11-19 17:38:52.053412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 
[2024-11-19 17:38:52.053772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.065 [2024-11-19 17:38:52.053868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.065 [2024-11-19 17:38:52.053875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.053884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.053891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.053899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.053906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.053917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.053924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.053933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.053940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.053954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.053962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.053971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.053977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.053986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.053993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 
17:38:52.054558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.054741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.054772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 
[2024-11-19 17:38:52.055809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.055925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.055967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.056005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.056037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.056078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.056111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.056147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.056179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.066 [2024-11-19 17:38:52.056223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.066 [2024-11-19 17:38:52.056258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.056296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.056329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.056369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49d60 is same with the state(6) to be set 00:20:50.067 [2024-11-19 17:38:52.057803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:50.067 [2024-11-19 17:38:52.057831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9fa90 (9): Bad file descriptor 00:20:50.067 [2024-11-19 17:38:52.058024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.067 [2024-11-19 17:38:52.058040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc19b0 with addr=10.0.0.2, port=4420 00:20:50.067 [2024-11-19 17:38:52.058048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc19b0 is same with the state(6) to be set 00:20:50.067 [2024-11-19 17:38:52.059148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:50.067 [2024-11-19 17:38:52.059173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x769640 (9): Bad file descriptor 00:20:50.067 [2024-11-19 17:38:52.059192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xbc19b0 (9): Bad file descriptor 00:20:50.067 [2024-11-19 17:38:52.059221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd1a60 is same with the state(6) to be set 00:20:50.067 [2024-11-19 17:38:52.059302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb948c0 (9): Bad file descriptor 00:20:50.067 [2024-11-19 17:38:52.059329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc460 is same with the state(6) to be set 00:20:50.067 [2024-11-19 17:38:52.059424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x688610 is same with the state(6) to be set 00:20:50.067 [2024-11-19 17:38:52.059498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773d50 (9): Bad file descriptor 00:20:50.067 [2024-11-19 17:38:52.059523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.067 [2024-11-19 17:38:52.059578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.059585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94370 is same with the state(6) to be set 00:20:50.067 [2024-11-19 17:38:52.059598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7741b0 (9): Bad file descriptor 00:20:50.067 [2024-11-19 17:38:52.059652] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:50.067 [2024-11-19 17:38:52.059699] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:50.067 [2024-11-19 17:38:52.059990] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:50.067 [2024-11-19 17:38:52.060043] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:50.067 [2024-11-19 17:38:52.060091] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:50.067 [2024-11-19 17:38:52.060288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.067 [2024-11-19 17:38:52.060306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9fa90 with addr=10.0.0.2, port=4420 00:20:50.067 [2024-11-19 17:38:52.060314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fa90 is same with the state(6) to be set 00:20:50.067 [2024-11-19 17:38:52.060332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:50.067 [2024-11-19 
17:38:52.060340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:50.067 [2024-11-19 17:38:52.060348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:50.067 [2024-11-19 17:38:52.060358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:50.067 [2024-11-19 17:38:52.060400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 [2024-11-19 17:38:52.060552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.067 [2024-11-19 17:38:52.060562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.067 
[2024-11-19 17:38:52.060570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.060699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.060707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.068 [2024-11-19 17:38:52.070915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.070988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.070997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 17:38:52.071280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.068 [2024-11-19 17:38:52.071288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.068 [2024-11-19 
17:38:52.071296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.071305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.071312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.071320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.071327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.071336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.071343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.071352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.071359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.071368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.071375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.071382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x979600 is same with the state(6) to be set 00:20:50.069 [2024-11-19 17:38:52.071779] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:50.069 [2024-11-19 17:38:52.071832] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:50.069 [2024-11-19 17:38:52.072079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.069 [2024-11-19 17:38:52.072095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x769640 with addr=10.0.0.2, port=4420 00:20:50.069 [2024-11-19 17:38:52.072104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769640 is same with the state(6) to be set 00:20:50.069 [2024-11-19 17:38:52.072116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9fa90 (9): Bad file descriptor 00:20:50.069 [2024-11-19 17:38:52.072143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd1a60 (9): Bad file descriptor 00:20:50.069 [2024-11-19 17:38:52.072169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcc460 (9): Bad file descriptor 00:20:50.069 [2024-11-19 17:38:52.072185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x688610 (9): Bad file descriptor 00:20:50.069 [2024-11-19 17:38:52.072203] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:20:50.069 [2024-11-19 17:38:52.072217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94370 (9): Bad file descriptor 00:20:50.069 [2024-11-19 17:38:52.072238] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:20:50.069 [2024-11-19 17:38:52.072248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x769640 (9): Bad file descriptor 00:20:50.069 [2024-11-19 17:38:52.073224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073599] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.069 [2024-11-19 17:38:52.073684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.069 [2024-11-19 17:38:52.073691] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 
17:38:52.073881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.073983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.073990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.070 [2024-11-19 17:38:52.074166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074260] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.070 [2024-11-19 17:38:52.074268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.070 [2024-11-19 17:38:52.074276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6070 is same with the state(6) to be set 00:20:50.071 [2024-11-19 17:38:52.074389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:50.071 [2024-11-19 17:38:52.074413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:50.071 [2024-11-19 17:38:52.074422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:50.071 [2024-11-19 17:38:52.074430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:50.071 [2024-11-19 17:38:52.074438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:50.071 [2024-11-19 17:38:52.074482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.071 [2024-11-19 17:38:52.074878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.074985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.074997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075008] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.071 [2024-11-19 17:38:52.075275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.071 [2024-11-19 17:38:52.075287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 
17:38:52.075382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075505] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 
[2024-11-19 17:38:52.075754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.075909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.075919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978450 is same with the state(6) to be set 00:20:50.072 [2024-11-19 17:38:52.077295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.072 [2024-11-19 17:38:52.077515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.072 [2024-11-19 17:38:52.077536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.072 [2024-11-19 17:38:52.077546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.077984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.077995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 
17:38:52.078019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078141] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 
[2024-11-19 17:38:52.078397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.073 [2024-11-19 17:38:52.078418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.073 [2024-11-19 17:38:52.078432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.078726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.078738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb78560 is same with the state(6) to be set 00:20:50.074 [2024-11-19 17:38:52.081335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:50.074 [2024-11-19 17:38:52.081365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:50.074 [2024-11-19 17:38:52.081379] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:50.074 [2024-11-19 17:38:52.081653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.074 [2024-11-19 17:38:52.081673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773d50 with addr=10.0.0.2, port=4420 00:20:50.074 [2024-11-19 17:38:52.081686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773d50 is same with the state(6) to be set 00:20:50.074 [2024-11-19 17:38:52.081697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:50.074 [2024-11-19 17:38:52.081706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:50.074 [2024-11-19 17:38:52.081723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:50.074 [2024-11-19 17:38:52.081734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:20:50.074 [2024-11-19 17:38:52.082110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:50.074 [2024-11-19 17:38:52.082297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.074 [2024-11-19 17:38:52.082315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc19b0 with addr=10.0.0.2, port=4420 00:20:50.074 [2024-11-19 17:38:52.082326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc19b0 is same with the state(6) to be set 00:20:50.074 [2024-11-19 17:38:52.082478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.074 [2024-11-19 17:38:52.082494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7741b0 with addr=10.0.0.2, port=4420 00:20:50.074 [2024-11-19 17:38:52.082505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7741b0 is same with the state(6) to be set 00:20:50.074 [2024-11-19 17:38:52.082669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.074 [2024-11-19 17:38:52.082684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb948c0 with addr=10.0.0.2, port=4420 00:20:50.074 [2024-11-19 17:38:52.082694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb948c0 is same with the state(6) to be set 00:20:50.074 [2024-11-19 17:38:52.082707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773d50 (9): Bad file descriptor 00:20:50.074 [2024-11-19 17:38:52.083858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.074 [2024-11-19 17:38:52.083882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd1a60 with addr=10.0.0.2, port=4420 00:20:50.074 [2024-11-19 17:38:52.083893] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd1a60 is same with the state(6) to be set 00:20:50.074 [2024-11-19 17:38:52.083906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc19b0 (9): Bad file descriptor 00:20:50.074 [2024-11-19 17:38:52.083919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7741b0 (9): Bad file descriptor 00:20:50.074 [2024-11-19 17:38:52.083932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb948c0 (9): Bad file descriptor 00:20:50.074 [2024-11-19 17:38:52.083944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:50.074 [2024-11-19 17:38:52.083959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:50.074 [2024-11-19 17:38:52.083970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:50.074 [2024-11-19 17:38:52.083980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:20:50.074 [2024-11-19 17:38:52.084056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.084071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.084087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.084099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.084111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.084122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.084139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.084148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.084161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.084171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.084183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.084192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.084204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.074 [2024-11-19 17:38:52.084215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.074 [2024-11-19 17:38:52.084227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.075 [2024-11-19 17:38:52.084449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084569] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.075 [2024-11-19 17:38:52.084697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.075 [2024-11-19 17:38:52.084707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:29 through cid:60, lba:28288 through lba:32256 (len:128 each) ...]
00:20:50.076 [2024-11-19 17:38:52.085451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.076 [2024-11-19 17:38:52.085472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.076 [2024-11-19 17:38:52.085482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.076 [2024-11-19 17:38:52.085490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.076 [2024-11-19 17:38:52.085499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.076 [2024-11-19 17:38:52.085506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.076 [2024-11-19 17:38:52.085514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79a90 is same with the state(6) to be set 00:20:50.076 [2024-11-19 17:38:52.086559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.076 [2024-11-19 17:38:52.086574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.076 [2024-11-19 17:38:52.086588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.076 [2024-11-19 17:38:52.086596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.076 [2024-11-19 17:38:52.086606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.076 [2024-11-19 17:38:52.086614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.076 [2024-11-19 17:38:52.086624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.076 [2024-11-19 17:38:52.086631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:4 through cid:58, lba:25088 through lba:32000 (len:128 each) ...]
00:20:50.077 [2024-11-19 17:38:52.087589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.077 [2024-11-19 17:38:52.087596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.077 [2024-11-19 17:38:52.087605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.077 [2024-11-19 17:38:52.087612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.077 [2024-11-19 17:38:52.087622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.087629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.087638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.087645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.087655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.087662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.087673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7afc0 is same with the state(6) to be set 00:20:50.078 [2024-11-19 17:38:52.088715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.078 [2024-11-19 17:38:52.088741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.088990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.088998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:50.078 [2024-11-19 17:38:52.089042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089131] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.078 [2024-11-19 17:38:52.089336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.078 [2024-11-19 17:38:52.089346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 
17:38:52.089422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089513] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 
[2024-11-19 17:38:52.089710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.079 [2024-11-19 17:38:52.089794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.079 [2024-11-19 17:38:52.089803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:50.079 [2024-11-19 17:38:52.089812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:50.079 [2024-11-19 17:38:52.089820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac30c0 is same with the state(6) to be set
00:20:50.079 [2024-11-19 17:38:52.090833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:50.079 [2024-11-19 17:38:52.090851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:50.079 [2024-11-19 17:38:52.090862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:50.079 [2024-11-19 17:38:52.090874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:50.079 task offset: 27648 on job bdev=Nvme10n1 fails
00:20:50.079
00:20:50.079 Latency(us)
00:20:50.079 [2024-11-19T16:38:52.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:50.079 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.079 Job: Nvme1n1 ended in about 0.97 seconds with error
00:20:50.079 Verification LBA range: start 0x0 length 0x400
00:20:50.079 Nvme1n1 : 0.97 197.54 12.35 65.85 0.00 240612.40 31229.33 207891.59
00:20:50.079 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.079 Job: Nvme2n1 ended in about 0.97 seconds with error
00:20:50.079 Verification LBA range: start 0x0 length 0x400
00:20:50.079 Nvme2n1 : 0.97 217.93 13.62 56.81 0.00 226212.96 20971.52 195126.32
00:20:50.079 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.079 Job: Nvme3n1 ended in about 0.95 seconds with error
00:20:50.079 Verification LBA range: start 0x0 length 0x400
00:20:50.079 Nvme3n1 : 0.95 268.36 16.77 22.01 0.00 209994.17 16184.54 217009.64
00:20:50.079 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.079 Job: Nvme4n1 ended in about 0.95 seconds with error
00:20:50.079 Verification LBA range: start 0x0 length 0x400
00:20:50.079 Nvme4n1 : 0.95 268.79 16.80 67.20 0.00 178935.72 5328.36 215186.03
00:20:50.080 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.080 Job: Nvme5n1 ended in about 0.97 seconds with error
00:20:50.080 Verification LBA range: start 0x0 length 0x400
00:20:50.080 Nvme5n1 : 0.97 196.97 12.31 65.66 0.00 225354.57 16868.40 204244.37
00:20:50.080 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.080 Job: Nvme6n1 ended in about 0.98 seconds with error
00:20:50.080 Verification LBA range: start 0x0 length 0x400
00:20:50.080 Nvme6n1 : 0.98 195.63 12.23 65.21 0.00 223013.62 18008.15 226127.69
00:20:50.080 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.080 Job: Nvme7n1 ended in about 0.98 seconds with error
00:20:50.080 Verification LBA range: start 0x0 length 0x400
00:20:50.080 Nvme7n1 : 0.98 195.20 12.20 65.07 0.00 219576.32 13449.13 209715.20
00:20:50.080 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.080 Job: Nvme8n1 ended in about 0.99 seconds with error
00:20:50.080 Verification LBA range: start 0x0 length 0x400
00:20:50.080 Nvme8n1 : 0.99 194.78 12.17 64.93 0.00 216140.35 14588.88 223392.28
00:20:50.080 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.080 Job: Nvme9n1 ended in about 0.98 seconds with error
00:20:50.080 Verification LBA range: start 0x0 length 0x400
00:20:50.080 Nvme9n1 : 0.98 196.71 12.29 65.57 0.00 209763.51 17552.25 232510.33
00:20:50.080 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:50.080 Job: Nvme10n1 ended in about 0.95 seconds with error
00:20:50.080 Verification LBA range: start 0x0 length 0x400
00:20:50.080 Nvme10n1 : 0.95 202.63 12.66 67.54 0.00 198688.28 4074.63 238892.97
00:20:50.080 [2024-11-19T16:38:52.303Z] ===================================================================================================================
00:20:50.080 [2024-11-19T16:38:52.303Z] Total : 2134.55 133.41 605.84 0.00 213968.60 4074.63 238892.97
00:20:50.080 [2024-11-19 17:38:52.121720] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:50.080 [2024-11-19 17:38:52.121771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:50.080 [2024-11-19 17:38:52.121840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd1a60 (9): Bad file descriptor
00:20:50.080 [2024-11-19 17:38:52.121854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.121861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.121871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.121880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.121889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.121896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.121903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.121910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.121917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.121925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.121932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.121939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.122344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:50.080 [2024-11-19 17:38:52.122365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9fa90 with addr=10.0.0.2, port=4420
00:20:50.080 [2024-11-19 17:38:52.122377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fa90 is same with the state(6) to be set
00:20:50.080 [2024-11-19 17:38:52.122601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:50.080 [2024-11-19 17:38:52.122613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x769640 with addr=10.0.0.2, port=4420
00:20:50.080 [2024-11-19 17:38:52.122622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769640 is same with the state(6) to be set
00:20:50.080 [2024-11-19 17:38:52.122759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:50.080 [2024-11-19 17:38:52.122771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94370 with addr=10.0.0.2, port=4420
00:20:50.080 [2024-11-19 17:38:52.122779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94370 is same with the state(6) to be set
00:20:50.080 [2024-11-19 17:38:52.122957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:50.080 [2024-11-19 17:38:52.122969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x688610 with addr=10.0.0.2, port=4420
00:20:50.080 [2024-11-19 17:38:52.122983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x688610 is same with the state(6) to be set
00:20:50.080 [2024-11-19 17:38:52.123128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:50.080 [2024-11-19 17:38:52.123140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcc460 with addr=10.0.0.2, port=4420
00:20:50.080 [2024-11-19 17:38:52.123148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcc460 is same with the state(6) to be set
00:20:50.080 [2024-11-19 17:38:52.123156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.123163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.123171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.123179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.123225] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:50.080 [2024-11-19 17:38:52.124169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:50.080 [2024-11-19 17:38:52.124229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9fa90 (9): Bad file descriptor
00:20:50.080 [2024-11-19 17:38:52.124244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x769640 (9): Bad file descriptor
00:20:50.080 [2024-11-19 17:38:52.124254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94370 (9): Bad file descriptor
00:20:50.080 [2024-11-19 17:38:52.124263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x688610 (9): Bad file descriptor
00:20:50.080 [2024-11-19 17:38:52.124273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcc460 (9): Bad file descriptor
00:20:50.080 [2024-11-19 17:38:52.124324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:50.080 [2024-11-19 17:38:52.124335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:50.080 [2024-11-19 17:38:52.124344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:50.080 [2024-11-19 17:38:52.124354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:50.080 [2024-11-19 17:38:52.124634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:50.080 [2024-11-19 17:38:52.124650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773d50 with addr=10.0.0.2, port=4420
00:20:50.080 [2024-11-19 17:38:52.124659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773d50 is same with the state(6) to be set
00:20:50.080 [2024-11-19 17:38:52.124667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.124682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.124690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.124698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.124706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.124712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.124724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.124730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.124737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.124744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.124752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.124758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.124765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.124772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.124779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.124786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.124793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:50.080 [2024-11-19 17:38:52.124799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:50.080 [2024-11-19 17:38:52.124806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:50.080 [2024-11-19 17:38:52.124813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:20:50.080 [2024-11-19 17:38:52.125075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.080 [2024-11-19 17:38:52.125089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb948c0 with addr=10.0.0.2, port=4420 00:20:50.080 [2024-11-19 17:38:52.125097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb948c0 is same with the state(6) to be set 00:20:50.080 [2024-11-19 17:38:52.125239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.081 [2024-11-19 17:38:52.125251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7741b0 with addr=10.0.0.2, port=4420 00:20:50.081 [2024-11-19 17:38:52.125260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7741b0 is same with the state(6) to be set 00:20:50.081 [2024-11-19 17:38:52.125477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.081 [2024-11-19 17:38:52.125489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc19b0 with addr=10.0.0.2, port=4420 00:20:50.081 [2024-11-19 17:38:52.125498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc19b0 is same with the state(6) to be set 00:20:50.081 [2024-11-19 17:38:52.125640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.081 [2024-11-19 17:38:52.125652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd1a60 with addr=10.0.0.2, port=4420 00:20:50.081 [2024-11-19 17:38:52.125660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd1a60 is same with the state(6) to be set 00:20:50.081 [2024-11-19 17:38:52.125671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773d50 (9): Bad file descriptor 00:20:50.081 [2024-11-19 17:38:52.125698] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb948c0 (9): Bad file descriptor 00:20:50.081 [2024-11-19 17:38:52.125709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7741b0 (9): Bad file descriptor 00:20:50.081 [2024-11-19 17:38:52.125719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc19b0 (9): Bad file descriptor 00:20:50.081 [2024-11-19 17:38:52.125731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd1a60 (9): Bad file descriptor 00:20:50.081 [2024-11-19 17:38:52.125739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:50.081 [2024-11-19 17:38:52.125746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:50.081 [2024-11-19 17:38:52.125753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:50.081 [2024-11-19 17:38:52.125760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:50.081 [2024-11-19 17:38:52.125785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:50.081 [2024-11-19 17:38:52.125792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:50.081 [2024-11-19 17:38:52.125800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:50.081 [2024-11-19 17:38:52.125807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:20:50.081 [2024-11-19 17:38:52.125814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:50.081 [2024-11-19 17:38:52.125821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:50.081 [2024-11-19 17:38:52.125827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:50.081 [2024-11-19 17:38:52.125834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:50.081 [2024-11-19 17:38:52.125841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:50.081 [2024-11-19 17:38:52.125847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:50.081 [2024-11-19 17:38:52.125854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:50.081 [2024-11-19 17:38:52.125861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:50.081 [2024-11-19 17:38:52.125868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:50.081 [2024-11-19 17:38:52.125874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:50.081 [2024-11-19 17:38:52.125882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:50.081 [2024-11-19 17:38:52.125888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:20:50.339 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3514611 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3514611 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3514611 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.285 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.285 rmmod nvme_tcp 00:20:51.285 rmmod nvme_fabrics 00:20:51.285 rmmod nvme_keyring 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:51.544 17:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3514546 ']' 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3514546 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3514546 ']' 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3514546 00:20:51.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3514546) - No such process 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3514546 is not found' 00:20:51.544 Process with pid 3514546 is not found 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.544 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.450 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.450 00:20:53.450 real 0m7.337s 00:20:53.450 user 0m17.439s 00:20:53.450 sys 0m1.376s 00:20:53.450 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.451 ************************************ 00:20:53.451 END TEST nvmf_shutdown_tc3 00:20:53.451 ************************************ 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:53.451 ************************************ 00:20:53.451 START TEST nvmf_shutdown_tc4 00:20:53.451 ************************************ 00:20:53.451 17:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.451 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:53.711 17:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.711 17:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:53.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:53.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.711 17:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:20:53.711 Found net devices under 0000:86:00.0: cvl_0_0 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:53.711 Found net devices under 0000:86:00.1: cvl_0_1 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:53.711 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:53.711 17:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:53.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:53.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:20:53.712 00:20:53.712 --- 10.0.0.2 ping statistics --- 00:20:53.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.712 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:20:53.712 00:20:53.712 --- 10.0.0.1 ping statistics --- 00:20:53.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.712 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:53.712 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.971 17:38:55 
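The nvmf_tcp_init sequence traced above (namespace creation, moving the target NIC into it, addressing both ends, opening TCP port 4420, then a ping in each direction) can be sketched as a dry-run shell helper. Interface and namespace names mirror the log; `run` only echoes the commands, so the sketch needs no root or real NICs, and is not the actual common.sh implementation:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the topology built by nvmf_tcp_init in the log above.
# `run` echoes instead of executing, so no root/NIC is required; swap its
# body for "$@" to apply the configuration for real.
set -euo pipefail

run() { echo "+ $*"; }

setup_nvmf_tcp_topology() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1
    local ns=cvl_0_0_ns_spdk
    local initiator_ip=10.0.0.1 target_ip=10.0.0.2

    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"          # target NIC into namespace
    run ip addr add "$initiator_ip/24" dev "$initiator_if"
    run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 "$target_ip"                        # reachability check, as in the log
}

setup_nvmf_tcp_topology
```

Putting the target side of the veth-like pair in its own namespace is what lets initiator (10.0.0.1) and target (10.0.0.2) traffic traverse a real network path on a single host.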
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3515870 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3515870 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3515870 ']' 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.971 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:53.971 [2024-11-19 17:38:56.029178] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:20:53.971 [2024-11-19 17:38:56.029225] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.971 [2024-11-19 17:38:56.109708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.971 [2024-11-19 17:38:56.151602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.971 [2024-11-19 17:38:56.151642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.971 [2024-11-19 17:38:56.151649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.971 [2024-11-19 17:38:56.151658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.971 [2024-11-19 17:38:56.151663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
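The nvmfappstart/waitforlisten pattern above (launch nvmf_tgt inside the namespace, then retry up to 100 times waiting on /var/tmp/spdk.sock) can be sketched as below. This is a simplified stand-in, not the real autotest_common.sh helper: the real one verifies the socket by issuing an RPC, while this sketch only checks process liveness and socket existence.

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten pattern seen above: after launching
# nvmf_tgt, poll until its RPC UNIX socket appears or the process dies.
# The retry budget of 100 matches the log; checking for the socket file
# (instead of issuing an rpc.py call) is an approximation.

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited during startup
        [[ -S "$rpc_addr" ]] && return 0         # socket is up, app is listening
        sleep 0.5
    done
    return 1                                     # timed out
}

# Usage mirroring the log (requires a built SPDK tree and root):
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
#   nvmfpid=$!
#   waitforlisten "$nvmfpid"
```

Polling with a liveness check avoids the two failure modes visible in such logs: hanging forever on a target that crashed during startup, and racing ahead of a target that has not bound its RPC socket yet.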
00:20:53.971 [2024-11-19 17:38:56.153285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.971 [2024-11-19 17:38:56.153395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.971 [2024-11-19 17:38:56.153501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.971 [2024-11-19 17:38:56.153501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:54.909 [2024-11-19 17:38:56.919774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.909 17:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.909 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:54.909 Malloc1 00:20:54.909 [2024-11-19 17:38:57.024822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.909 Malloc2 00:20:54.909 Malloc3 00:20:55.168 Malloc4 00:20:55.168 Malloc5 00:20:55.169 Malloc6 00:20:55.169 Malloc7 00:20:55.169 Malloc8 00:20:55.169 Malloc9 
00:20:55.427 Malloc10 00:20:55.427 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.427 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:55.427 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:55.427 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.427 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3516149 00:20:55.427 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:55.427 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:55.427 [2024-11-19 17:38:57.537460] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
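The shutdown at target/shutdown.sh@155 runs the killprocess helper whose checks are traced above (kill -0 liveness test, a `ps --no-headers -o comm=` guard against signalling the bare `sudo` wrapper, then the kill). A condensed sketch of that logic follows; the real autotest_common.sh helper additionally escalates to SIGKILL and handles sudo-wrapped children, which this sketch omits:

```shell
#!/usr/bin/env bash
# Condensed sketch of the killprocess flow traced above: verify the pid is
# alive, refuse to signal a bare `sudo` wrapper, announce, terminate, reap.

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1             # not running
    if [[ "$(uname)" == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ "$process_name" == sudo ]] && return 1      # never kill sudo itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                    # reap; TERM exit is expected
}

# In tc4 the target is killed while spdk_nvme_perf still has I/O queued,
# which is what produces the "Write completed with error" flood below.
```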
00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3515870 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3515870 ']' 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3515870 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3515870 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3515870' 00:21:00.708 killing process with pid 3515870 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3515870 00:21:00.708 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3515870 00:21:00.708 [2024-11-19 17:39:02.528724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c84450 is same with the state(6) to be set 00:21:00.708 [2024-11-19 
17:39:02.528773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c84450 is same with the state(6) to be set 00:21:00.708 (message repeated four more times through [2024-11-19 17:39:02.528804]) 00:21:00.708 Write completed with error (sct=0, sc=8) 00:21:00.708 starting I/O failed: -6 00:21:00.708 (write-error/I/O-failed messages repeated for the remaining I/Os queued on nqn.2016-06.io.spdk:cnode1) 00:21:00.708 [2024-11-19 17:39:02.530771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:00.709 [2024-11-19 17:39:02.531744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.709 [2024-11-19 17:39:02.532785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:00.709 [2024-11-19 17:39:02.534328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:00.709 NVMe io qpair process completion error 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 (write-error/I/O-failed messages repeated for the I/Os queued on nqn.2016-06.io.spdk:cnode2) 00:21:00.710 [2024-11-19 17:39:02.535487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:00.710 starting I/O failed: -6
00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 [2024-11-19 17:39:02.536429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 
00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with 
error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 [2024-11-19 17:39:02.537475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.710 starting I/O failed: -6 00:21:00.710 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed 
with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write 
completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 
Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 [2024-11-19 17:39:02.539190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.711 NVMe io qpair process completion error 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with 
error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 [2024-11-19 17:39:02.540188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:00.711 starting I/O failed: -6 00:21:00.711 starting I/O failed: -6 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write 
completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 starting I/O 
failed: -6 00:21:00.711 Write completed with error (sct=0, sc=8) 00:21:00.711 [2024-11-19 17:39:02.541102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 
00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with 
error (sct=0, sc=8) 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 [2024-11-19 17:39:02.542133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, 
sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error 
(sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 Write completed with 
error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6 00:21:00.712 [2024-11-19 17:39:02.543902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.712 NVMe io qpair process completion error 00:21:00.712 Write completed with error (sct=0, sc=8) 00:21:00.712 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.713 [2024-11-19 17:39:02.544935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.713 [2024-11-19 17:39:02.545815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.714 [2024-11-19 17:39:02.546869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.714 [2024-11-19 17:39:02.548423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.714 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.714 [2024-11-19 17:39:02.549393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.715 [2024-11-19 17:39:02.550305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.715 [2024-11-19 17:39:02.551380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.716 [2024-11-19 17:39:02.557402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:00.716 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.716 [2024-11-19 17:39:02.558308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.716 [2024-11-19 17:39:02.559210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:21:00.716 starting I/O failed: -6
00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.716 starting I/O failed: -6 00:21:00.716 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 [2024-11-19 17:39:02.560466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.717 
Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 
00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: 
-6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 [2024-11-19 17:39:02.563754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:00.717 NVMe io qpair process completion error 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 
starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 [2024-11-19 17:39:02.564897] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 starting I/O failed: -6 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.717 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 
00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 [2024-11-19 17:39:02.565783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write 
completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 
00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 [2024-11-19 17:39:02.566851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 
starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 
00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.718 Write completed with error (sct=0, sc=8) 00:21:00.718 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, 
sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 [2024-11-19 17:39:02.568831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:00.719 NVMe io qpair process completion error 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 
Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error 
(sct=0, sc=8) 00:21:00.719 [2024-11-19 17:39:02.569797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6 00:21:00.719 Write completed with error (sct=0, sc=8) 
00:21:00.719 Write completed with error (sct=0, sc=8) 00:21:00.719 starting I/O failed: -6
[... many identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs elided ...]
00:21:00.719 [2024-11-19 17:39:02.570735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error / I/O-failure pairs elided ...]
00:21:00.720 [2024-11-19 17:39:02.571749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-error / I/O-failure pairs elided ...]
00:21:00.720 [2024-11-19 17:39:02.576997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:00.720 NVMe io qpair process completion error
[... write-error / I/O-failure pairs elided ...]
00:21:00.722 [2024-11-19 17:39:02.583999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error / I/O-failure pairs elided ...]
00:21:00.722 [2024-11-19 17:39:02.584894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-error / I/O-failure pairs elided ...]
00:21:00.723 [2024-11-19 17:39:02.585951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-error / I/O-failure pairs continue ...]
-6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 Write completed with error (sct=0, sc=8) 00:21:00.723 starting I/O failed: -6 00:21:00.723 [2024-11-19 17:39:02.588354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:00.723 NVMe io qpair process completion error 00:21:00.723 Initializing NVMe Controllers 00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:00.723 Controller IO queue size 128, less than required. 
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:00.723 Controller IO queue size 128, less than required.
00:21:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:00.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:00.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:00.724 Initialization complete. Launching workers.
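Immediately below, spdk_nvme_perf prints a per-controller latency summary table. As a quick offline sanity check of that table, the Total row can be recomputed from the per-controller rows. This is an editorial sketch (the rows are transcribed from the log; the script is not part of the SPDK test harness):

```python
# Per-controller rows from the spdk_nvme_perf summary below:
# (IOPS, MiB/s, Average_us, min_us, max_us)
rows = [
    (2163.91, 92.98, 59157.33, 734.49, 102338.21),  # cnode8
    (2159.63, 92.80, 59286.90, 686.13, 109199.97),  # cnode5
    (2151.50, 92.45, 59578.13, 649.34, 107041.57),  # cnode7
    (2155.56, 92.62, 59499.99, 915.11, 104679.26),  # cnode4
    (2160.06, 92.81, 59392.30, 702.52, 128017.77),  # cnode9
    (2152.57, 92.49, 59649.86, 667.66, 125082.22),  # cnode10
    (2087.51, 89.70, 61544.35, 698.69, 104131.02),  # cnode6
    (2101.21, 90.29, 60403.12, 711.12, 99686.95),   # cnode1
    (2132.88, 91.65, 59517.19, 706.89, 99701.08),   # cnode2
    (2114.90, 90.87, 60036.07, 716.98, 99765.03),   # cnode3
]

# Totals are sums; min/max latency are the extremes across controllers.
total_iops = sum(r[0] for r in rows)   # ~21379.7, matching the logged Total
total_mibs = sum(r[1] for r in rows)   # 918.66
lat_min = min(r[3] for r in rows)      # 649.34 (cnode7)
lat_max = max(r[4] for r in rows)      # 128017.77 (cnode9)
```

The recomputed IOPS total (~21379.73) agrees with the logged Total of 21379.71 up to per-row rounding, and the Total row's min/max are simply the extremes across the ten controllers.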
00:21:00.724 ========================================================
00:21:00.724 Latency(us)
00:21:00.724 Device Information : IOPS MiB/s Average min max
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2163.91 92.98 59157.33 734.49 102338.21
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2159.63 92.80 59286.90 686.13 109199.97
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2151.50 92.45 59578.13 649.34 107041.57
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2155.56 92.62 59499.99 915.11 104679.26
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2160.06 92.81 59392.30 702.52 128017.77
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2152.57 92.49 59649.86 667.66 125082.22
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2087.51 89.70 61544.35 698.69 104131.02
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2101.21 90.29 60403.12 711.12 99686.95
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2132.88 91.65 59517.19 706.89 99701.08
00:21:00.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2114.90 90.87 60036.07 716.98 99765.03
00:21:00.724 ========================================================
00:21:00.724 Total : 21379.71 918.66 59798.97 649.34 128017.77
00:21:00.724
00:21:00.724 [2024-11-19 17:39:02.591354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d740 is same with the state(6) to be set
00:21:00.724 [2024-11-19 17:39:02.591402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6cbc0 is same with the state(6) to be set
00:21:00.724 [2024-11-19 17:39:02.591433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xf6d410 is same with the state(6) to be set 00:21:00.724 [2024-11-19 17:39:02.591464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c890 is same with the state(6) to be set 00:21:00.724 [2024-11-19 17:39:02.591493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6da70 is same with the state(6) to be set 00:21:00.724 [2024-11-19 17:39:02.591523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e900 is same with the state(6) to be set 00:21:00.724 [2024-11-19 17:39:02.591553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6cef0 is same with the state(6) to be set 00:21:00.724 [2024-11-19 17:39:02.591582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e720 is same with the state(6) to be set 00:21:00.724 [2024-11-19 17:39:02.591611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6eae0 is same with the state(6) to be set 00:21:00.724 [2024-11-19 17:39:02.591642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c560 is same with the state(6) to be set 00:21:00.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:00.724 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3516149 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3516149 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 
-- # local arg=wait 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3516149 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:02.105 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.106 rmmod nvme_tcp 00:21:02.106 rmmod nvme_fabrics 00:21:02.106 rmmod nvme_keyring 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3515870 ']' 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3515870 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3515870 ']' 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3515870 00:21:02.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3515870) - No such process 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3515870 is not found' 00:21:02.106 Process with pid 3515870 is not found 00:21:02.106 17:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.106 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.106 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.106 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.106 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.106 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.106 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.014 00:21:04.014 real 0m10.400s 00:21:04.014 user 0m27.671s 00:21:04.014 sys 0m5.094s 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.014 17:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:04.014 ************************************ 00:21:04.014 END TEST nvmf_shutdown_tc4 00:21:04.014 ************************************ 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:04.014 00:21:04.014 real 0m41.229s 00:21:04.014 user 1m42.245s 00:21:04.014 sys 0m13.919s 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:04.014 ************************************ 00:21:04.014 END TEST nvmf_shutdown 00:21:04.014 ************************************ 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.014 ************************************ 00:21:04.014 START TEST nvmf_nsid 00:21:04.014 ************************************ 00:21:04.014 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:04.275 * Looking for test storage... 
00:21:04.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.275 
17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.275 --rc genhtml_branch_coverage=1 00:21:04.275 --rc genhtml_function_coverage=1 00:21:04.275 --rc genhtml_legend=1 00:21:04.275 --rc geninfo_all_blocks=1 00:21:04.275 --rc 
geninfo_unexecuted_blocks=1 00:21:04.275 00:21:04.275 ' 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.275 --rc genhtml_branch_coverage=1 00:21:04.275 --rc genhtml_function_coverage=1 00:21:04.275 --rc genhtml_legend=1 00:21:04.275 --rc geninfo_all_blocks=1 00:21:04.275 --rc geninfo_unexecuted_blocks=1 00:21:04.275 00:21:04.275 ' 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.275 --rc genhtml_branch_coverage=1 00:21:04.275 --rc genhtml_function_coverage=1 00:21:04.275 --rc genhtml_legend=1 00:21:04.275 --rc geninfo_all_blocks=1 00:21:04.275 --rc geninfo_unexecuted_blocks=1 00:21:04.275 00:21:04.275 ' 00:21:04.275 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.275 --rc genhtml_branch_coverage=1 00:21:04.275 --rc genhtml_function_coverage=1 00:21:04.275 --rc genhtml_legend=1 00:21:04.275 --rc geninfo_all_blocks=1 00:21:04.275 --rc geninfo_unexecuted_blocks=1 00:21:04.275 00:21:04.275 ' 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
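The xtrace above steps through `cmp_versions`/`lt` from scripts/common.sh: each version string is split on '.', '-' and ':' (IFS=.-:) and compared field by field numerically, so `lt 1.15 2` succeeds because the first fields compare 1 < 2. A rough Python equivalent of that comparison, for illustration only (an approximation, not a verbatim port of the shell code; non-numeric fields such as "rc1" are simply skipped here):

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Return True if v1 sorts strictly before v2, comparing
    numeric fields split on '.', '-' and ':' (missing fields count as 0)."""
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    # Pad both sides with zeros so "1.15" vs "2" compares [1, 15, 0] vs [2, 0, 0].
    for x, y in zip(a + [0] * len(b), b + [0] * len(a)):
        if x != y:
            return x < y
    return False
```

For example, `version_lt("1.15", "2")` is True, mirroring the `lt 1.15 2` call traced above, which gates the lcov-option selection on the installed lcov version.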
00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.276 17:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.276 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.855 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.855 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.855 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.856 17:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.856 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:10.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:21:10.856 00:21:10.856 --- 10.0.0.2 ping statistics --- 00:21:10.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.856 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:21:10.856 00:21:10.856 --- 10.0.0.1 ping statistics --- 00:21:10.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.856 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.856 17:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3521131 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3521131 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3521131 ']' 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.856 [2024-11-19 17:39:12.369428] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:21:10.856 [2024-11-19 17:39:12.369472] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.856 [2024-11-19 17:39:12.449184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.856 [2024-11-19 17:39:12.490803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.856 [2024-11-19 17:39:12.490841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.856 [2024-11-19 17:39:12.490848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.856 [2024-11-19 17:39:12.490854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.856 [2024-11-19 17:39:12.490859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.856 [2024-11-19 17:39:12.491439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3521266 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.856 
17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d21eebc6-83ed-401a-a10f-7a8b60107dc6 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=98c23b73-4f97-435a-b0e6-3c74fe9eff13 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b82c6def-3fbb-4e6a-80cf-b9656bd806d2 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.856 null0 00:21:10.856 null1 00:21:10.856 [2024-11-19 17:39:12.672714] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:21:10.856 [2024-11-19 17:39:12.672761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3521266 ] 00:21:10.856 null2 00:21:10.856 [2024-11-19 17:39:12.679726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.856 [2024-11-19 17:39:12.703913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3521266 /var/tmp/tgt2.sock 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3521266 ']' 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:10.856 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.857 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:10.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:10.857 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.857 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.857 [2024-11-19 17:39:12.749018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.857 [2024-11-19 17:39:12.790145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.857 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.857 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:10.857 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:11.116 [2024-11-19 17:39:13.317897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.116 [2024-11-19 17:39:13.334012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:11.375 nvme0n1 nvme0n2 00:21:11.375 nvme1n1 00:21:11.375 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:11.375 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:11.375 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:12.312 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:12.313 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:12.313 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:13.250 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:13.250 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d21eebc6-83ed-401a-a10f-7a8b60107dc6 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:13.509 17:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d21eebc683ed401aa10f7a8b60107dc6 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D21EEBC683ED401AA10F7A8B60107DC6 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D21EEBC683ED401AA10F7A8B60107DC6 == \D\2\1\E\E\B\C\6\8\3\E\D\4\0\1\A\A\1\0\F\7\A\8\B\6\0\1\0\7\D\C\6 ]] 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 98c23b73-4f97-435a-b0e6-3c74fe9eff13 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:13.509 
17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=98c23b734f97435ab0e63c74fe9eff13 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 98C23B734F97435AB0E63C74FE9EFF13 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 98C23B734F97435AB0E63C74FE9EFF13 == \9\8\C\2\3\B\7\3\4\F\9\7\4\3\5\A\B\0\E\6\3\C\7\4\F\E\9\E\F\F\1\3 ]] 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b82c6def-3fbb-4e6a-80cf-b9656bd806d2 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b82c6def3fbb4e6a80cfb9656bd806d2 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B82C6DEF3FBB4E6A80CFB9656BD806D2 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B82C6DEF3FBB4E6A80CFB9656BD806D2 == \B\8\2\C\6\D\E\F\3\F\B\B\4\E\6\A\8\0\C\F\B\9\6\5\6\B\D\8\0\6\D\2 ]] 00:21:13.509 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3521266 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3521266 ']' 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3521266 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3521266 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3521266' 00:21:13.768 killing process with pid 3521266 00:21:13.768 17:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3521266 00:21:13.768 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3521266 00:21:14.027 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:14.027 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.027 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:14.027 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.027 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:14.027 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.027 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.027 rmmod nvme_tcp 00:21:14.027 rmmod nvme_fabrics 00:21:14.287 rmmod nvme_keyring 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3521131 ']' 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3521131 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3521131 ']' 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3521131 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.287 17:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3521131 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3521131' 00:21:14.287 killing process with pid 3521131 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3521131 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3521131 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.287 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.287 17:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:16.825 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:16.825
00:21:16.825 real 0m12.377s
00:21:16.825 user 0m9.682s
00:21:16.825 sys 0m5.494s
00:21:16.825 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:16.825 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:16.825 ************************************
00:21:16.825 END TEST nvmf_nsid
00:21:16.825 ************************************
00:21:16.825 17:39:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:21:16.825
00:21:16.825 real 12m0.785s
00:21:16.825 user 25m45.669s
00:21:16.825 sys 3m44.654s
00:21:16.825 17:39:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:16.825 17:39:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:16.825 ************************************
00:21:16.825 END TEST nvmf_target_extra
00:21:16.825 ************************************
00:21:16.825 17:39:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:16.825 17:39:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:16.825 17:39:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:16.825 17:39:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:16.825 ************************************
00:21:16.825 START TEST nvmf_host
00:21:16.825 ************************************
00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:16.825 * Looking for test storage...
00:21:16.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.825 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:16.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.826 --rc genhtml_branch_coverage=1 00:21:16.826 --rc genhtml_function_coverage=1 00:21:16.826 --rc genhtml_legend=1 00:21:16.826 --rc geninfo_all_blocks=1 00:21:16.826 --rc geninfo_unexecuted_blocks=1 00:21:16.826 00:21:16.826 ' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:16.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.826 --rc genhtml_branch_coverage=1 00:21:16.826 --rc genhtml_function_coverage=1 00:21:16.826 --rc genhtml_legend=1 00:21:16.826 --rc 
geninfo_all_blocks=1 00:21:16.826 --rc geninfo_unexecuted_blocks=1 00:21:16.826 00:21:16.826 ' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:16.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.826 --rc genhtml_branch_coverage=1 00:21:16.826 --rc genhtml_function_coverage=1 00:21:16.826 --rc genhtml_legend=1 00:21:16.826 --rc geninfo_all_blocks=1 00:21:16.826 --rc geninfo_unexecuted_blocks=1 00:21:16.826 00:21:16.826 ' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:16.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.826 --rc genhtml_branch_coverage=1 00:21:16.826 --rc genhtml_function_coverage=1 00:21:16.826 --rc genhtml_legend=1 00:21:16.826 --rc geninfo_all_blocks=1 00:21:16.826 --rc geninfo_unexecuted_blocks=1 00:21:16.826 00:21:16.826 ' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.826 ************************************ 00:21:16.826 START TEST nvmf_multicontroller 00:21:16.826 ************************************ 00:21:16.826 17:39:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:16.826 * Looking for test storage... 
00:21:16.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.826 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:16.826 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:16.826 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:17.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.087 --rc genhtml_branch_coverage=1 00:21:17.087 --rc genhtml_function_coverage=1 
00:21:17.087 --rc genhtml_legend=1 00:21:17.087 --rc geninfo_all_blocks=1 00:21:17.087 --rc geninfo_unexecuted_blocks=1 00:21:17.087 00:21:17.087 ' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:17.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.087 --rc genhtml_branch_coverage=1 00:21:17.087 --rc genhtml_function_coverage=1 00:21:17.087 --rc genhtml_legend=1 00:21:17.087 --rc geninfo_all_blocks=1 00:21:17.087 --rc geninfo_unexecuted_blocks=1 00:21:17.087 00:21:17.087 ' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:17.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.087 --rc genhtml_branch_coverage=1 00:21:17.087 --rc genhtml_function_coverage=1 00:21:17.087 --rc genhtml_legend=1 00:21:17.087 --rc geninfo_all_blocks=1 00:21:17.087 --rc geninfo_unexecuted_blocks=1 00:21:17.087 00:21:17.087 ' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:17.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.087 --rc genhtml_branch_coverage=1 00:21:17.087 --rc genhtml_function_coverage=1 00:21:17.087 --rc genhtml_legend=1 00:21:17.087 --rc geninfo_all_blocks=1 00:21:17.087 --rc geninfo_unexecuted_blocks=1 00:21:17.087 00:21:17.087 ' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.087 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.088 17:39:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.088 17:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:23.663 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:23.663 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.663 17:39:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:23.663 Found net devices under 0000:86:00.0: cvl_0_0 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:23.663 Found net devices under 0000:86:00.1: cvl_0_1 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.663 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:21:23.664 00:21:23.664 --- 10.0.0.2 ping statistics --- 00:21:23.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.664 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:21:23.664 17:39:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:21:23.664 00:21:23.664 --- 10.0.0.1 ping statistics --- 00:21:23.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.664 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3525461 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3525461 00:21:23.664 17:39:25 
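[editor's note] The namespace plumbing traced in the log above (netns creation, moving the target NIC, IP assignment, the iptables ACCEPT rule, and the reachability pings) can be summarized as the following sketch. This is a reconstruction from the trace, not the test's own source: interface names (`cvl_0_0`, `cvl_0_1`), IPs, and the namespace name are taken from the log, and the `run` wrapper (an addition for illustration) only echoes commands by default, since the real ones need root.

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace topology built in the log above.
# With DRY_RUN=1 (the default) commands are echoed, not executed.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                        # target side lives in its own netns
run ip link set cvl_0_0 netns "$NS"           # move the target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP stays on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target reachability check
```

The two pings in the log (host to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) verify both directions of this link before the target is started inside the namespace.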
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3525461 ']' 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 [2024-11-19 17:39:25.099587] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:21:23.664 [2024-11-19 17:39:25.099632] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.664 [2024-11-19 17:39:25.180560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.664 [2024-11-19 17:39:25.222805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.664 [2024-11-19 17:39:25.222841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:23.664 [2024-11-19 17:39:25.222848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.664 [2024-11-19 17:39:25.222854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.664 [2024-11-19 17:39:25.222860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.664 [2024-11-19 17:39:25.224271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.664 [2024-11-19 17:39:25.224378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.664 [2024-11-19 17:39:25.224379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 [2024-11-19 17:39:25.359590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 Malloc0 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 [2024-11-19 
17:39:25.422033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 [2024-11-19 17:39:25.429974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 Malloc1 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:23.664 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3525548 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
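[editor's note] The `rpc_cmd` calls traced above amount to a short bring-up sequence: one TCP transport, two malloc bdevs, two subsystems, and two listeners each (ports 4420 and 4421, which is what makes the later multipath checks possible). A hedged recap, with NQNs, sizes, and addresses copied from the log; the `rpc` stand-in below is an illustration-only echo wrapper — in the real test `rpc_cmd` dispatches to SPDK's `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Recap of the RPC bring-up sequence from the log. rpc() only echoes;
# replace its body with scripts/rpc.py to run against a live nvmf_tgt.
set -euo pipefail
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
    rpc bdev_malloc_create 64 512 -b "Malloc$((i - 1))"   # 64 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
    # two listeners per subsystem: the second port is the failover path
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
done
```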
SIGINT SIGTERM EXIT 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3525548 /var/tmp/bdevperf.sock 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3525548 ']' 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 NVMe0n1 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.665 1 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:23.665 17:39:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 request: 00:21:23.665 { 00:21:23.665 "name": "NVMe0", 00:21:23.665 "trtype": "tcp", 00:21:23.665 "traddr": "10.0.0.2", 00:21:23.665 "adrfam": "ipv4", 00:21:23.665 "trsvcid": "4420", 00:21:23.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.665 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:23.665 "hostaddr": "10.0.0.1", 00:21:23.665 "prchk_reftag": false, 00:21:23.665 "prchk_guard": false, 00:21:23.665 "hdgst": false, 00:21:23.665 "ddgst": false, 00:21:23.665 "allow_unrecognized_csi": false, 00:21:23.665 "method": "bdev_nvme_attach_controller", 00:21:23.665 "req_id": 1 00:21:23.665 } 00:21:23.665 Got JSON-RPC error response 00:21:23.665 response: 00:21:23.665 { 00:21:23.665 "code": -114, 00:21:23.665 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:23.665 } 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:23.665 17:39:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.665 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 request: 00:21:23.665 { 00:21:23.665 "name": "NVMe0", 00:21:23.665 "trtype": "tcp", 00:21:23.665 "traddr": "10.0.0.2", 00:21:23.665 "adrfam": "ipv4", 00:21:23.665 "trsvcid": "4420", 00:21:23.665 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.665 "hostaddr": "10.0.0.1", 00:21:23.665 "prchk_reftag": false, 00:21:23.665 "prchk_guard": false, 00:21:23.665 "hdgst": false, 00:21:23.665 "ddgst": false, 00:21:23.665 "allow_unrecognized_csi": false, 00:21:23.665 "method": "bdev_nvme_attach_controller", 00:21:23.665 "req_id": 1 00:21:23.665 } 00:21:23.665 Got JSON-RPC error response 00:21:23.665 response: 00:21:23.665 { 00:21:23.665 "code": -114, 00:21:23.665 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:23.665 } 00:21:23.665 17:39:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.925 request: 00:21:23.925 { 00:21:23.925 "name": "NVMe0", 00:21:23.925 "trtype": "tcp", 00:21:23.925 "traddr": "10.0.0.2", 00:21:23.925 "adrfam": "ipv4", 00:21:23.925 "trsvcid": "4420", 00:21:23.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.925 "hostaddr": "10.0.0.1", 00:21:23.925 "prchk_reftag": false, 00:21:23.925 "prchk_guard": false, 00:21:23.925 "hdgst": false, 00:21:23.925 "ddgst": false, 00:21:23.925 "multipath": "disable", 00:21:23.925 "allow_unrecognized_csi": false, 00:21:23.925 "method": "bdev_nvme_attach_controller", 00:21:23.925 "req_id": 1 00:21:23.925 } 00:21:23.925 Got JSON-RPC error response 00:21:23.925 response: 00:21:23.925 { 00:21:23.925 "code": -114, 00:21:23.925 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:23.925 } 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:23.925 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.926 request: 00:21:23.926 { 00:21:23.926 "name": "NVMe0", 00:21:23.926 "trtype": "tcp", 00:21:23.926 "traddr": "10.0.0.2", 00:21:23.926 "adrfam": "ipv4", 00:21:23.926 "trsvcid": "4420", 00:21:23.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.926 "hostaddr": "10.0.0.1", 00:21:23.926 "prchk_reftag": false, 00:21:23.926 "prchk_guard": false, 00:21:23.926 "hdgst": false, 00:21:23.926 "ddgst": false, 00:21:23.926 "multipath": "failover", 00:21:23.926 "allow_unrecognized_csi": false, 00:21:23.926 "method": "bdev_nvme_attach_controller", 00:21:23.926 "req_id": 1 00:21:23.926 } 00:21:23.926 Got JSON-RPC error response 00:21:23.926 response: 00:21:23.926 { 00:21:23.926 "code": -114, 00:21:23.926 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:23.926 } 00:21:23.926 17:39:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.926 17:39:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:24.186 NVMe0n1 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:24.186 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:24.186 17:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.244 { 00:21:25.244 "results": [ 00:21:25.244 { 00:21:25.244 "job": "NVMe0n1", 00:21:25.244 "core_mask": "0x1", 00:21:25.244 "workload": "write", 00:21:25.244 "status": "finished", 00:21:25.244 "queue_depth": 128, 00:21:25.244 "io_size": 4096, 00:21:25.244 "runtime": 1.005576, 00:21:25.244 "iops": 23941.502183823002, 00:21:25.244 "mibps": 93.5214929055586, 00:21:25.244 "io_failed": 0, 00:21:25.244 "io_timeout": 0, 00:21:25.244 "avg_latency_us": 5334.527727229221, 00:21:25.244 "min_latency_us": 3077.342608695652, 00:21:25.244 "max_latency_us": 12366.358260869565 00:21:25.244 } 00:21:25.244 ], 00:21:25.244 "core_count": 1 00:21:25.244 } 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3525548 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3525548 ']' 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3525548 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.244 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3525548 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3525548' 00:21:25.503 killing process with pid 3525548 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3525548 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3525548 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:25.503 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:25.503 [2024-11-19 17:39:25.533234] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:21:25.503 [2024-11-19 17:39:25.533282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525548 ] 00:21:25.503 [2024-11-19 17:39:25.607219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.503 [2024-11-19 17:39:25.648712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.503 [2024-11-19 17:39:26.276267] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 71988a05-ea92-47f6-a886-2c1309a656e9 already exists 00:21:25.503 [2024-11-19 17:39:26.276294] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:71988a05-ea92-47f6-a886-2c1309a656e9 alias for bdev NVMe1n1 00:21:25.503 [2024-11-19 17:39:26.276302] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:25.503 Running I/O for 1 seconds... 00:21:25.503 23884.00 IOPS, 93.30 MiB/s 00:21:25.503 Latency(us) 00:21:25.503 [2024-11-19T16:39:27.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.503 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:25.503 NVMe0n1 : 1.01 23941.50 93.52 0.00 0.00 5334.53 3077.34 12366.36 00:21:25.503 [2024-11-19T16:39:27.726Z] =================================================================================================================== 00:21:25.503 [2024-11-19T16:39:27.726Z] Total : 23941.50 93.52 0.00 0.00 5334.53 3077.34 12366.36 00:21:25.503 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.503 00:21:25.503 Latency(us) 00:21:25.503 [2024-11-19T16:39:27.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.503 [2024-11-19T16:39:27.726Z] =================================================================================================================== 00:21:25.503 [2024-11-19T16:39:27.726Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:25.503 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.503 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.503 rmmod nvme_tcp 00:21:25.503 rmmod nvme_fabrics 00:21:25.762 rmmod nvme_keyring 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3525461 ']' 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3525461 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3525461 ']' 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3525461 
00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3525461 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3525461' 00:21:25.762 killing process with pid 3525461 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3525461 00:21:25.762 17:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3525461 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.022 17:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.928 17:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.928 00:21:27.928 real 0m11.166s 00:21:27.928 user 0m12.377s 00:21:27.928 sys 0m5.155s 00:21:27.928 17:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.928 17:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.928 ************************************ 00:21:27.928 END TEST nvmf_multicontroller 00:21:27.928 ************************************ 00:21:27.928 17:39:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:27.928 17:39:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:27.928 17:39:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.928 17:39:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.188 ************************************ 00:21:28.188 START TEST nvmf_aer 00:21:28.188 ************************************ 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:28.188 * Looking for test storage... 
00:21:28.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:28.188 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:28.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.189 --rc genhtml_branch_coverage=1 00:21:28.189 --rc genhtml_function_coverage=1 00:21:28.189 --rc genhtml_legend=1 00:21:28.189 --rc geninfo_all_blocks=1 00:21:28.189 --rc geninfo_unexecuted_blocks=1 00:21:28.189 00:21:28.189 ' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:28.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.189 --rc 
genhtml_branch_coverage=1 00:21:28.189 --rc genhtml_function_coverage=1 00:21:28.189 --rc genhtml_legend=1 00:21:28.189 --rc geninfo_all_blocks=1 00:21:28.189 --rc geninfo_unexecuted_blocks=1 00:21:28.189 00:21:28.189 ' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:28.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.189 --rc genhtml_branch_coverage=1 00:21:28.189 --rc genhtml_function_coverage=1 00:21:28.189 --rc genhtml_legend=1 00:21:28.189 --rc geninfo_all_blocks=1 00:21:28.189 --rc geninfo_unexecuted_blocks=1 00:21:28.189 00:21:28.189 ' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:28.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.189 --rc genhtml_branch_coverage=1 00:21:28.189 --rc genhtml_function_coverage=1 00:21:28.189 --rc genhtml_legend=1 00:21:28.189 --rc geninfo_all_blocks=1 00:21:28.189 --rc geninfo_unexecuted_blocks=1 00:21:28.189 00:21:28.189 ' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.189 17:39:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.189 17:39:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:34.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:34.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.760 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.761 17:39:35 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:34.761 Found net devices under 0000:86:00.0: cvl_0_0 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:34.761 Found net devices under 0000:86:00.1: cvl_0_1 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.761 17:39:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:34.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:34.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:21:34.761 00:21:34.761 --- 10.0.0.2 ping statistics --- 00:21:34.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.761 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:21:34.761 00:21:34.761 --- 10.0.0.1 ping statistics --- 00:21:34.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.761 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3529476 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3529476 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3529476 ']' 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.761 17:39:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.761 [2024-11-19 17:39:36.345645] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:21:34.761 [2024-11-19 17:39:36.345697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.761 [2024-11-19 17:39:36.424490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.761 [2024-11-19 17:39:36.466545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:34.761 [2024-11-19 17:39:36.466583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.761 [2024-11-19 17:39:36.466592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.761 [2024-11-19 17:39:36.466598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.761 [2024-11-19 17:39:36.466602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.761 [2024-11-19 17:39:36.468206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.761 [2024-11-19 17:39:36.468322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.761 [2024-11-19 17:39:36.468410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.761 [2024-11-19 17:39:36.468409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.020 [2024-11-19 17:39:37.231906] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.020 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.279 Malloc0 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.279 [2024-11-19 17:39:37.301815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.279 [ 00:21:35.279 { 00:21:35.279 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:35.279 "subtype": "Discovery", 00:21:35.279 "listen_addresses": [], 00:21:35.279 "allow_any_host": true, 00:21:35.279 "hosts": [] 00:21:35.279 }, 00:21:35.279 { 00:21:35.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.279 "subtype": "NVMe", 00:21:35.279 "listen_addresses": [ 00:21:35.279 { 00:21:35.279 "trtype": "TCP", 00:21:35.279 "adrfam": "IPv4", 00:21:35.279 "traddr": "10.0.0.2", 00:21:35.279 "trsvcid": "4420" 00:21:35.279 } 00:21:35.279 ], 00:21:35.279 "allow_any_host": true, 00:21:35.279 "hosts": [], 00:21:35.279 "serial_number": "SPDK00000000000001", 00:21:35.279 "model_number": "SPDK bdev Controller", 00:21:35.279 "max_namespaces": 2, 00:21:35.279 "min_cntlid": 1, 00:21:35.279 "max_cntlid": 65519, 00:21:35.279 "namespaces": [ 00:21:35.279 { 00:21:35.279 "nsid": 1, 00:21:35.279 "bdev_name": "Malloc0", 00:21:35.279 "name": "Malloc0", 00:21:35.279 "nguid": "4192219F7FCE4FB0808C66B59D3FAB53", 00:21:35.279 "uuid": "4192219f-7fce-4fb0-808c-66b59d3fab53" 00:21:35.279 } 00:21:35.279 ] 00:21:35.279 } 00:21:35.279 ] 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3529725 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:35.279 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 Malloc1 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.538 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 [ 00:21:35.538 { 00:21:35.538 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:35.538 "subtype": "Discovery", 00:21:35.538 "listen_addresses": [], 00:21:35.538 "allow_any_host": true, 00:21:35.538 "hosts": [] 00:21:35.538 }, 00:21:35.538 { 00:21:35.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.538 "subtype": "NVMe", 00:21:35.538 "listen_addresses": [ 00:21:35.538 { 00:21:35.538 "trtype": "TCP", 00:21:35.539 "adrfam": "IPv4", 00:21:35.539 "traddr": "10.0.0.2", 00:21:35.539 "trsvcid": "4420" 00:21:35.539 } 00:21:35.539 ], 00:21:35.539 "allow_any_host": true, 00:21:35.539 "hosts": [], 00:21:35.539 "serial_number": "SPDK00000000000001", 00:21:35.539 "model_number": 
"SPDK bdev Controller", 00:21:35.539 "max_namespaces": 2, 00:21:35.539 "min_cntlid": 1, 00:21:35.539 "max_cntlid": 65519, 00:21:35.539 Asynchronous Event Request test 00:21:35.539 Attaching to 10.0.0.2 00:21:35.539 Attached to 10.0.0.2 00:21:35.539 Registering asynchronous event callbacks... 00:21:35.539 Starting namespace attribute notice tests for all controllers... 00:21:35.539 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:35.539 aer_cb - Changed Namespace 00:21:35.539 Cleaning up... 00:21:35.539 "namespaces": [ 00:21:35.539 { 00:21:35.539 "nsid": 1, 00:21:35.539 "bdev_name": "Malloc0", 00:21:35.539 "name": "Malloc0", 00:21:35.539 "nguid": "4192219F7FCE4FB0808C66B59D3FAB53", 00:21:35.539 "uuid": "4192219f-7fce-4fb0-808c-66b59d3fab53" 00:21:35.539 }, 00:21:35.539 { 00:21:35.539 "nsid": 2, 00:21:35.539 "bdev_name": "Malloc1", 00:21:35.539 "name": "Malloc1", 00:21:35.539 "nguid": "0BADCFDA545943D6860EDDF98E57EC85", 00:21:35.539 "uuid": "0badcfda-5459-43d6-860e-ddf98e57ec85" 00:21:35.539 } 00:21:35.539 ] 00:21:35.539 } 00:21:35.539 ] 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3529725 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.539 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 
17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.798 rmmod nvme_tcp 00:21:35.798 rmmod nvme_fabrics 00:21:35.798 rmmod nvme_keyring 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3529476 ']' 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3529476 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3529476 ']' 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 
-- # kill -0 3529476 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3529476 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3529476' 00:21:35.798 killing process with pid 3529476 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3529476 00:21:35.798 17:39:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3529476 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.057 17:39:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.962 17:39:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:37.962 00:21:37.962 real 0m9.991s 00:21:37.962 user 0m8.235s 00:21:37.962 sys 0m4.955s 00:21:37.962 17:39:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.962 17:39:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:37.962 ************************************ 00:21:37.962 END TEST nvmf_aer 00:21:37.962 ************************************ 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.222 ************************************ 00:21:38.222 START TEST nvmf_async_init 00:21:38.222 ************************************ 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:38.222 * Looking for test storage... 
00:21:38.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.222 17:39:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.222 --rc genhtml_branch_coverage=1 00:21:38.222 --rc genhtml_function_coverage=1 00:21:38.222 --rc genhtml_legend=1 00:21:38.222 --rc geninfo_all_blocks=1 00:21:38.222 --rc geninfo_unexecuted_blocks=1 00:21:38.222 
00:21:38.222 ' 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.222 --rc genhtml_branch_coverage=1 00:21:38.222 --rc genhtml_function_coverage=1 00:21:38.222 --rc genhtml_legend=1 00:21:38.222 --rc geninfo_all_blocks=1 00:21:38.222 --rc geninfo_unexecuted_blocks=1 00:21:38.222 00:21:38.222 ' 00:21:38.222 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.222 --rc genhtml_branch_coverage=1 00:21:38.222 --rc genhtml_function_coverage=1 00:21:38.222 --rc genhtml_legend=1 00:21:38.222 --rc geninfo_all_blocks=1 00:21:38.223 --rc geninfo_unexecuted_blocks=1 00:21:38.223 00:21:38.223 ' 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.223 --rc genhtml_branch_coverage=1 00:21:38.223 --rc genhtml_function_coverage=1 00:21:38.223 --rc genhtml_legend=1 00:21:38.223 --rc geninfo_all_blocks=1 00:21:38.223 --rc geninfo_unexecuted_blocks=1 00:21:38.223 00:21:38.223 ' 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9d2b1c2d76f44abb9cd002955b12aab1 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.223 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.482 17:39:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.052 17:39:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:45.052 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:45.052 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:45.052 Found net devices under 0000:86:00.0: cvl_0_0 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:45.052 Found net devices under 0000:86:00.1: cvl_0_1 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:45.052 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:45.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:45.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:21:45.053 00:21:45.053 --- 10.0.0.2 ping statistics --- 00:21:45.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.053 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:21:45.053 00:21:45.053 --- 10.0.0.1 ping statistics --- 00:21:45.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.053 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3533261 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3533261 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3533261 ']' 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 [2024-11-19 17:39:46.451801] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:21:45.053 [2024-11-19 17:39:46.451852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.053 [2024-11-19 17:39:46.532294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.053 [2024-11-19 17:39:46.573454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.053 [2024-11-19 17:39:46.573492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.053 [2024-11-19 17:39:46.573500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.053 [2024-11-19 17:39:46.573506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.053 [2024-11-19 17:39:46.573514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.053 [2024-11-19 17:39:46.574072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 [2024-11-19 17:39:46.704277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 null0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9d2b1c2d76f44abb9cd002955b12aab1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 [2024-11-19 17:39:46.756539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 nvme0n1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:45.053 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.054 17:39:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 [ 00:21:45.054 { 00:21:45.054 "name": "nvme0n1", 00:21:45.054 "aliases": [ 00:21:45.054 "9d2b1c2d-76f4-4abb-9cd0-02955b12aab1" 00:21:45.054 ], 00:21:45.054 "product_name": "NVMe disk", 00:21:45.054 "block_size": 512, 00:21:45.054 "num_blocks": 2097152, 00:21:45.054 "uuid": "9d2b1c2d-76f4-4abb-9cd0-02955b12aab1", 00:21:45.054 "numa_id": 1, 00:21:45.054 "assigned_rate_limits": { 00:21:45.054 "rw_ios_per_sec": 0, 00:21:45.054 "rw_mbytes_per_sec": 0, 00:21:45.054 "r_mbytes_per_sec": 0, 00:21:45.054 "w_mbytes_per_sec": 0 00:21:45.054 }, 00:21:45.054 "claimed": false, 00:21:45.054 "zoned": false, 00:21:45.054 "supported_io_types": { 00:21:45.054 "read": true, 00:21:45.054 "write": true, 00:21:45.054 "unmap": false, 00:21:45.054 "flush": true, 00:21:45.054 "reset": true, 00:21:45.054 "nvme_admin": true, 00:21:45.054 "nvme_io": true, 00:21:45.054 "nvme_io_md": false, 00:21:45.054 "write_zeroes": true, 00:21:45.054 "zcopy": false, 00:21:45.054 "get_zone_info": false, 00:21:45.054 "zone_management": false, 00:21:45.054 "zone_append": false, 00:21:45.054 "compare": true, 00:21:45.054 "compare_and_write": true, 00:21:45.054 "abort": true, 00:21:45.054 "seek_hole": false, 00:21:45.054 "seek_data": false, 00:21:45.054 "copy": true, 00:21:45.054 
"nvme_iov_md": false 00:21:45.054 }, 00:21:45.054 "memory_domains": [ 00:21:45.054 { 00:21:45.054 "dma_device_id": "system", 00:21:45.054 "dma_device_type": 1 00:21:45.054 } 00:21:45.054 ], 00:21:45.054 "driver_specific": { 00:21:45.054 "nvme": [ 00:21:45.054 { 00:21:45.054 "trid": { 00:21:45.054 "trtype": "TCP", 00:21:45.054 "adrfam": "IPv4", 00:21:45.054 "traddr": "10.0.0.2", 00:21:45.054 "trsvcid": "4420", 00:21:45.054 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:45.054 }, 00:21:45.054 "ctrlr_data": { 00:21:45.054 "cntlid": 1, 00:21:45.054 "vendor_id": "0x8086", 00:21:45.054 "model_number": "SPDK bdev Controller", 00:21:45.054 "serial_number": "00000000000000000000", 00:21:45.054 "firmware_revision": "25.01", 00:21:45.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.054 "oacs": { 00:21:45.054 "security": 0, 00:21:45.054 "format": 0, 00:21:45.054 "firmware": 0, 00:21:45.054 "ns_manage": 0 00:21:45.054 }, 00:21:45.054 "multi_ctrlr": true, 00:21:45.054 "ana_reporting": false 00:21:45.054 }, 00:21:45.054 "vs": { 00:21:45.054 "nvme_version": "1.3" 00:21:45.054 }, 00:21:45.054 "ns_data": { 00:21:45.054 "id": 1, 00:21:45.054 "can_share": true 00:21:45.054 } 00:21:45.054 } 00:21:45.054 ], 00:21:45.054 "mp_policy": "active_passive" 00:21:45.054 } 00:21:45.054 } 00:21:45.054 ] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 [2024-11-19 17:39:47.021080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.054 [2024-11-19 17:39:47.021136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x7fc220 (9): Bad file descriptor 00:21:45.054 [2024-11-19 17:39:47.155024] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 [ 00:21:45.054 { 00:21:45.054 "name": "nvme0n1", 00:21:45.054 "aliases": [ 00:21:45.054 "9d2b1c2d-76f4-4abb-9cd0-02955b12aab1" 00:21:45.054 ], 00:21:45.054 "product_name": "NVMe disk", 00:21:45.054 "block_size": 512, 00:21:45.054 "num_blocks": 2097152, 00:21:45.054 "uuid": "9d2b1c2d-76f4-4abb-9cd0-02955b12aab1", 00:21:45.054 "numa_id": 1, 00:21:45.054 "assigned_rate_limits": { 00:21:45.054 "rw_ios_per_sec": 0, 00:21:45.054 "rw_mbytes_per_sec": 0, 00:21:45.054 "r_mbytes_per_sec": 0, 00:21:45.054 "w_mbytes_per_sec": 0 00:21:45.054 }, 00:21:45.054 "claimed": false, 00:21:45.054 "zoned": false, 00:21:45.054 "supported_io_types": { 00:21:45.054 "read": true, 00:21:45.054 "write": true, 00:21:45.054 "unmap": false, 00:21:45.054 "flush": true, 00:21:45.054 "reset": true, 00:21:45.054 "nvme_admin": true, 00:21:45.054 "nvme_io": true, 00:21:45.054 "nvme_io_md": false, 00:21:45.054 "write_zeroes": true, 00:21:45.054 "zcopy": false, 00:21:45.054 "get_zone_info": false, 00:21:45.054 "zone_management": false, 00:21:45.054 "zone_append": false, 00:21:45.054 "compare": true, 00:21:45.054 "compare_and_write": true, 00:21:45.054 "abort": true, 00:21:45.054 "seek_hole": false, 00:21:45.054 "seek_data": false, 00:21:45.054 "copy": true, 00:21:45.054 "nvme_iov_md": false 00:21:45.054 }, 00:21:45.054 "memory_domains": [ 
00:21:45.054 { 00:21:45.054 "dma_device_id": "system", 00:21:45.054 "dma_device_type": 1 00:21:45.054 } 00:21:45.054 ], 00:21:45.054 "driver_specific": { 00:21:45.054 "nvme": [ 00:21:45.054 { 00:21:45.054 "trid": { 00:21:45.054 "trtype": "TCP", 00:21:45.054 "adrfam": "IPv4", 00:21:45.054 "traddr": "10.0.0.2", 00:21:45.054 "trsvcid": "4420", 00:21:45.054 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:45.054 }, 00:21:45.054 "ctrlr_data": { 00:21:45.054 "cntlid": 2, 00:21:45.054 "vendor_id": "0x8086", 00:21:45.054 "model_number": "SPDK bdev Controller", 00:21:45.054 "serial_number": "00000000000000000000", 00:21:45.054 "firmware_revision": "25.01", 00:21:45.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.054 "oacs": { 00:21:45.054 "security": 0, 00:21:45.054 "format": 0, 00:21:45.054 "firmware": 0, 00:21:45.054 "ns_manage": 0 00:21:45.054 }, 00:21:45.054 "multi_ctrlr": true, 00:21:45.054 "ana_reporting": false 00:21:45.054 }, 00:21:45.054 "vs": { 00:21:45.054 "nvme_version": "1.3" 00:21:45.054 }, 00:21:45.054 "ns_data": { 00:21:45.054 "id": 1, 00:21:45.054 "can_share": true 00:21:45.054 } 00:21:45.054 } 00:21:45.054 ], 00:21:45.054 "mp_policy": "active_passive" 00:21:45.054 } 00:21:45.054 } 00:21:45.054 ] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.o4HLtjdrub 
00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.o4HLtjdrub 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.o4HLtjdrub 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 [2024-11-19 17:39:47.229708] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.054 [2024-11-19 17:39:47.229796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:45.054 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:45.055 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.055 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.055 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.055 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:45.055 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.055 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.055 [2024-11-19 17:39:47.249778] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.313 nvme0n1 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.313 [ 00:21:45.313 { 00:21:45.313 "name": "nvme0n1", 00:21:45.313 "aliases": [ 00:21:45.313 "9d2b1c2d-76f4-4abb-9cd0-02955b12aab1" 00:21:45.313 ], 00:21:45.313 "product_name": "NVMe disk", 00:21:45.313 "block_size": 512, 00:21:45.313 "num_blocks": 2097152, 00:21:45.313 "uuid": "9d2b1c2d-76f4-4abb-9cd0-02955b12aab1", 00:21:45.313 "numa_id": 1, 00:21:45.313 "assigned_rate_limits": { 00:21:45.313 "rw_ios_per_sec": 0, 00:21:45.313 
"rw_mbytes_per_sec": 0, 00:21:45.313 "r_mbytes_per_sec": 0, 00:21:45.313 "w_mbytes_per_sec": 0 00:21:45.313 }, 00:21:45.313 "claimed": false, 00:21:45.313 "zoned": false, 00:21:45.313 "supported_io_types": { 00:21:45.313 "read": true, 00:21:45.313 "write": true, 00:21:45.313 "unmap": false, 00:21:45.313 "flush": true, 00:21:45.313 "reset": true, 00:21:45.313 "nvme_admin": true, 00:21:45.313 "nvme_io": true, 00:21:45.313 "nvme_io_md": false, 00:21:45.313 "write_zeroes": true, 00:21:45.313 "zcopy": false, 00:21:45.313 "get_zone_info": false, 00:21:45.313 "zone_management": false, 00:21:45.313 "zone_append": false, 00:21:45.313 "compare": true, 00:21:45.313 "compare_and_write": true, 00:21:45.313 "abort": true, 00:21:45.313 "seek_hole": false, 00:21:45.313 "seek_data": false, 00:21:45.313 "copy": true, 00:21:45.313 "nvme_iov_md": false 00:21:45.313 }, 00:21:45.313 "memory_domains": [ 00:21:45.313 { 00:21:45.313 "dma_device_id": "system", 00:21:45.313 "dma_device_type": 1 00:21:45.313 } 00:21:45.313 ], 00:21:45.313 "driver_specific": { 00:21:45.313 "nvme": [ 00:21:45.313 { 00:21:45.313 "trid": { 00:21:45.313 "trtype": "TCP", 00:21:45.313 "adrfam": "IPv4", 00:21:45.313 "traddr": "10.0.0.2", 00:21:45.313 "trsvcid": "4421", 00:21:45.313 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:45.313 }, 00:21:45.313 "ctrlr_data": { 00:21:45.313 "cntlid": 3, 00:21:45.313 "vendor_id": "0x8086", 00:21:45.313 "model_number": "SPDK bdev Controller", 00:21:45.313 "serial_number": "00000000000000000000", 00:21:45.313 "firmware_revision": "25.01", 00:21:45.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.313 "oacs": { 00:21:45.313 "security": 0, 00:21:45.313 "format": 0, 00:21:45.313 "firmware": 0, 00:21:45.313 "ns_manage": 0 00:21:45.313 }, 00:21:45.313 "multi_ctrlr": true, 00:21:45.313 "ana_reporting": false 00:21:45.313 }, 00:21:45.313 "vs": { 00:21:45.313 "nvme_version": "1.3" 00:21:45.313 }, 00:21:45.313 "ns_data": { 00:21:45.313 "id": 1, 00:21:45.313 "can_share": true 00:21:45.313 } 
00:21:45.313 } 00:21:45.313 ], 00:21:45.313 "mp_policy": "active_passive" 00:21:45.313 } 00:21:45.313 } 00:21:45.313 ] 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.o4HLtjdrub 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:45.313 rmmod nvme_tcp 00:21:45.313 rmmod nvme_fabrics 00:21:45.313 rmmod nvme_keyring 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:45.313 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:45.313 17:39:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3533261 ']' 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3533261 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3533261 ']' 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3533261 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3533261 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3533261' 00:21:45.314 killing process with pid 3533261 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3533261 00:21:45.314 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3533261 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:45.573 
17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.573 17:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.478 17:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.478 00:21:47.478 real 0m9.473s 00:21:47.478 user 0m3.131s 00:21:47.478 sys 0m4.782s 00:21:47.478 17:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.478 17:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:47.478 ************************************ 00:21:47.478 END TEST nvmf_async_init 00:21:47.478 ************************************ 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.738 ************************************ 00:21:47.738 START TEST dma 00:21:47.738 ************************************ 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:47.738 * Looking for test storage... 00:21:47.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.738 --rc genhtml_branch_coverage=1 00:21:47.738 --rc genhtml_function_coverage=1 00:21:47.738 --rc genhtml_legend=1 00:21:47.738 --rc geninfo_all_blocks=1 00:21:47.738 --rc geninfo_unexecuted_blocks=1 00:21:47.738 00:21:47.738 ' 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.738 --rc genhtml_branch_coverage=1 00:21:47.738 --rc genhtml_function_coverage=1 
00:21:47.738 --rc genhtml_legend=1 00:21:47.738 --rc geninfo_all_blocks=1 00:21:47.738 --rc geninfo_unexecuted_blocks=1 00:21:47.738 00:21:47.738 ' 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.738 --rc genhtml_branch_coverage=1 00:21:47.738 --rc genhtml_function_coverage=1 00:21:47.738 --rc genhtml_legend=1 00:21:47.738 --rc geninfo_all_blocks=1 00:21:47.738 --rc geninfo_unexecuted_blocks=1 00:21:47.738 00:21:47.738 ' 00:21:47.738 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.739 --rc genhtml_branch_coverage=1 00:21:47.739 --rc genhtml_function_coverage=1 00:21:47.739 --rc genhtml_legend=1 00:21:47.739 --rc geninfo_all_blocks=1 00:21:47.739 --rc geninfo_unexecuted_blocks=1 00:21:47.739 00:21:47.739 ' 00:21:47.739 17:39:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.739 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:47.998 
17:39:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:47.998 00:21:47.998 real 0m0.214s 00:21:47.998 user 0m0.124s 00:21:47.998 sys 0m0.104s 00:21:47.998 17:39:49 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.998 17:39:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:47.998 ************************************ 00:21:47.998 END TEST dma 00:21:47.998 ************************************ 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.998 ************************************ 00:21:47.998 START TEST nvmf_identify 00:21:47.998 ************************************ 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:47.998 * Looking for test storage... 
00:21:47.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.998 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:47.999 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:48.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.258 --rc genhtml_branch_coverage=1 00:21:48.258 --rc genhtml_function_coverage=1 00:21:48.258 --rc genhtml_legend=1 00:21:48.258 --rc geninfo_all_blocks=1 00:21:48.258 --rc geninfo_unexecuted_blocks=1 00:21:48.258 00:21:48.258 ' 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:21:48.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.258 --rc genhtml_branch_coverage=1 00:21:48.258 --rc genhtml_function_coverage=1 00:21:48.258 --rc genhtml_legend=1 00:21:48.258 --rc geninfo_all_blocks=1 00:21:48.258 --rc geninfo_unexecuted_blocks=1 00:21:48.258 00:21:48.258 ' 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:48.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.258 --rc genhtml_branch_coverage=1 00:21:48.258 --rc genhtml_function_coverage=1 00:21:48.258 --rc genhtml_legend=1 00:21:48.258 --rc geninfo_all_blocks=1 00:21:48.258 --rc geninfo_unexecuted_blocks=1 00:21:48.258 00:21:48.258 ' 00:21:48.258 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:48.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.259 --rc genhtml_branch_coverage=1 00:21:48.259 --rc genhtml_function_coverage=1 00:21:48.259 --rc genhtml_legend=1 00:21:48.259 --rc geninfo_all_blocks=1 00:21:48.259 --rc geninfo_unexecuted_blocks=1 00:21:48.259 00:21:48.259 ' 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.259 17:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.832 17:39:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:54.832 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.832 
17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:54.832 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.832 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:54.833 Found net devices under 0000:86:00.0: cvl_0_0 00:21:54.833 17:39:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:54.833 Found net devices under 0000:86:00.1: cvl_0_1 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.833 17:39:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:21:54.833 00:21:54.833 --- 10.0.0.2 ping statistics --- 00:21:54.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.833 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:54.833 00:21:54.833 --- 10.0.0.1 ping statistics --- 00:21:54.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.833 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3537079 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3537079 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3537079 ']' 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.833 [2024-11-19 17:39:56.245397] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:21:54.833 [2024-11-19 17:39:56.245440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.833 [2024-11-19 17:39:56.324635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.833 [2024-11-19 17:39:56.370151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.833 [2024-11-19 17:39:56.370185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.833 [2024-11-19 17:39:56.370193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.833 [2024-11-19 17:39:56.370199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.833 [2024-11-19 17:39:56.370204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:54.833 [2024-11-19 17:39:56.371619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.833 [2024-11-19 17:39:56.371729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.833 [2024-11-19 17:39:56.371836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.833 [2024-11-19 17:39:56.371836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.833 [2024-11-19 17:39:56.471987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.833 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.833 Malloc0 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.834 17:39:56 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.834 [2024-11-19 17:39:56.568814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.834 17:39:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.834 [ 00:21:54.834 { 00:21:54.834 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:54.834 "subtype": "Discovery", 00:21:54.834 "listen_addresses": [ 00:21:54.834 { 00:21:54.834 "trtype": "TCP", 00:21:54.834 "adrfam": "IPv4", 00:21:54.834 "traddr": "10.0.0.2", 00:21:54.834 "trsvcid": "4420" 00:21:54.834 } 00:21:54.834 ], 00:21:54.834 "allow_any_host": true, 00:21:54.834 "hosts": [] 00:21:54.834 }, 00:21:54.834 { 00:21:54.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.834 "subtype": "NVMe", 00:21:54.834 "listen_addresses": [ 00:21:54.834 { 00:21:54.834 "trtype": "TCP", 00:21:54.834 "adrfam": "IPv4", 00:21:54.834 "traddr": "10.0.0.2", 00:21:54.834 "trsvcid": "4420" 00:21:54.834 } 00:21:54.834 ], 00:21:54.834 "allow_any_host": true, 00:21:54.834 "hosts": [], 00:21:54.834 "serial_number": "SPDK00000000000001", 00:21:54.834 "model_number": "SPDK bdev Controller", 00:21:54.834 "max_namespaces": 32, 00:21:54.834 "min_cntlid": 1, 00:21:54.834 "max_cntlid": 65519, 00:21:54.834 "namespaces": [ 00:21:54.834 { 00:21:54.834 "nsid": 1, 00:21:54.834 "bdev_name": "Malloc0", 00:21:54.834 "name": "Malloc0", 00:21:54.834 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:54.834 "eui64": "ABCDEF0123456789", 00:21:54.834 "uuid": "e7fbf89b-07fa-4f81-9f6c-be32aab95bc0" 00:21:54.834 } 00:21:54.834 ] 00:21:54.834 } 00:21:54.834 ] 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.834 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:54.834 [2024-11-19 17:39:56.621207] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:21:54.834 [2024-11-19 17:39:56.621240] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537108 ] 00:21:54.834 [2024-11-19 17:39:56.662889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:54.834 [2024-11-19 17:39:56.662939] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:54.834 [2024-11-19 17:39:56.666952] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:54.834 [2024-11-19 17:39:56.666964] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:54.834 [2024-11-19 17:39:56.666974] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:54.834 [2024-11-19 17:39:56.667579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:54.834 [2024-11-19 17:39:56.667614] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1483690 0 00:21:54.834 [2024-11-19 17:39:56.681958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:54.834 [2024-11-19 17:39:56.681972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:54.834 [2024-11-19 17:39:56.681977] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:54.834 [2024-11-19 17:39:56.681981] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:54.834 [2024-11-19 17:39:56.682013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.682019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.682023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.834 [2024-11-19 17:39:56.682035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:54.834 [2024-11-19 17:39:56.682053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.834 [2024-11-19 17:39:56.689958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.834 [2024-11-19 17:39:56.689966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.834 [2024-11-19 17:39:56.689970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.689974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.834 [2024-11-19 17:39:56.689986] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:54.834 [2024-11-19 17:39:56.689993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:54.834 [2024-11-19 17:39:56.689998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:54.834 [2024-11-19 17:39:56.690010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 
00:21:54.834 [2024-11-19 17:39:56.690025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.834 [2024-11-19 17:39:56.690038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.834 [2024-11-19 17:39:56.690198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.834 [2024-11-19 17:39:56.690204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.834 [2024-11-19 17:39:56.690207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.834 [2024-11-19 17:39:56.690215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:54.834 [2024-11-19 17:39:56.690222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:54.834 [2024-11-19 17:39:56.690229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.834 [2024-11-19 17:39:56.690245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.834 [2024-11-19 17:39:56.690255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.834 [2024-11-19 17:39:56.690319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.834 [2024-11-19 17:39:56.690325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:54.834 [2024-11-19 17:39:56.690328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.834 [2024-11-19 17:39:56.690336] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:54.834 [2024-11-19 17:39:56.690343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:54.834 [2024-11-19 17:39:56.690348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.834 [2024-11-19 17:39:56.690360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.834 [2024-11-19 17:39:56.690370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.834 [2024-11-19 17:39:56.690429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.834 [2024-11-19 17:39:56.690435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.834 [2024-11-19 17:39:56.690438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.834 [2024-11-19 17:39:56.690446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:54.834 [2024-11-19 17:39:56.690454] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.834 [2024-11-19 17:39:56.690461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.834 [2024-11-19 17:39:56.690466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.835 [2024-11-19 17:39:56.690476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.835 [2024-11-19 17:39:56.690538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.835 [2024-11-19 17:39:56.690544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.835 [2024-11-19 17:39:56.690547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.835 [2024-11-19 17:39:56.690555] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:54.835 [2024-11-19 17:39:56.690559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:54.835 [2024-11-19 17:39:56.690566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:54.835 [2024-11-19 17:39:56.690673] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:54.835 [2024-11-19 17:39:56.690678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:54.835 [2024-11-19 17:39:56.690687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.690700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.835 [2024-11-19 17:39:56.690710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.835 [2024-11-19 17:39:56.690778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.835 [2024-11-19 17:39:56.690783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.835 [2024-11-19 17:39:56.690786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.835 [2024-11-19 17:39:56.690794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:54.835 [2024-11-19 17:39:56.690802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.690814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.835 [2024-11-19 17:39:56.690823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.835 [2024-11-19 
17:39:56.690882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.835 [2024-11-19 17:39:56.690888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.835 [2024-11-19 17:39:56.690891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.835 [2024-11-19 17:39:56.690898] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:54.835 [2024-11-19 17:39:56.690903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:54.835 [2024-11-19 17:39:56.690909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:54.835 [2024-11-19 17:39:56.690919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:54.835 [2024-11-19 17:39:56.690927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.690930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.690936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.835 [2024-11-19 17:39:56.690946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.835 [2024-11-19 17:39:56.691038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.835 [2024-11-19 17:39:56.691044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:21:54.835 [2024-11-19 17:39:56.691047] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691051] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1483690): datao=0, datal=4096, cccid=0 00:21:54.835 [2024-11-19 17:39:56.691055] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e5100) on tqpair(0x1483690): expected_datao=0, payload_size=4096 00:21:54.835 [2024-11-19 17:39:56.691061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691068] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691072] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.835 [2024-11-19 17:39:56.691088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.835 [2024-11-19 17:39:56.691091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.835 [2024-11-19 17:39:56.691102] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:54.835 [2024-11-19 17:39:56.691107] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:54.835 [2024-11-19 17:39:56.691111] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:54.835 [2024-11-19 17:39:56.691118] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:54.835 [2024-11-19 17:39:56.691122] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:54.835 [2024-11-19 17:39:56.691127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:54.835 [2024-11-19 17:39:56.691137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:54.835 [2024-11-19 17:39:56.691144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.691156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.835 [2024-11-19 17:39:56.691167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.835 [2024-11-19 17:39:56.691233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.835 [2024-11-19 17:39:56.691238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.835 [2024-11-19 17:39:56.691242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.835 [2024-11-19 17:39:56.691251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.691263] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.835 [2024-11-19 17:39:56.691268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.691279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.835 [2024-11-19 17:39:56.691285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.691296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.835 [2024-11-19 17:39:56.691303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.691314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.835 [2024-11-19 17:39:56.691319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:54.835 [2024-11-19 17:39:56.691327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:54.835 [2024-11-19 17:39:56.691332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1483690) 00:21:54.835 [2024-11-19 17:39:56.691341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.835 [2024-11-19 17:39:56.691352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5100, cid 0, qid 0 00:21:54.835 [2024-11-19 17:39:56.691357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5280, cid 1, qid 0 00:21:54.835 [2024-11-19 17:39:56.691361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5400, cid 2, qid 0 00:21:54.835 [2024-11-19 17:39:56.691365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.835 [2024-11-19 17:39:56.691369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5700, cid 4, qid 0 00:21:54.835 [2024-11-19 17:39:56.691468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.835 [2024-11-19 17:39:56.691474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.835 [2024-11-19 17:39:56.691477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.835 [2024-11-19 17:39:56.691480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5700) on tqpair=0x1483690 00:21:54.835 [2024-11-19 17:39:56.691487] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:54.836 [2024-11-19 17:39:56.691492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:21:54.836 [2024-11-19 17:39:56.691501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.691504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1483690) 00:21:54.836 [2024-11-19 17:39:56.691510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.836 [2024-11-19 17:39:56.691520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5700, cid 4, qid 0 00:21:54.836 [2024-11-19 17:39:56.691597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.836 [2024-11-19 17:39:56.691603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.836 [2024-11-19 17:39:56.691607] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.691610] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1483690): datao=0, datal=4096, cccid=4 00:21:54.836 [2024-11-19 17:39:56.691613] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e5700) on tqpair(0x1483690): expected_datao=0, payload_size=4096 00:21:54.836 [2024-11-19 17:39:56.691617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.691629] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.691633] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.836 [2024-11-19 17:39:56.732108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.836 [2024-11-19 17:39:56.732112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x14e5700) on tqpair=0x1483690 00:21:54.836 [2024-11-19 17:39:56.732130] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:54.836 [2024-11-19 17:39:56.732154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1483690) 00:21:54.836 [2024-11-19 17:39:56.732167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.836 [2024-11-19 17:39:56.732174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1483690) 00:21:54.836 [2024-11-19 17:39:56.732186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.836 [2024-11-19 17:39:56.732201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5700, cid 4, qid 0 00:21:54.836 [2024-11-19 17:39:56.732207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5880, cid 5, qid 0 00:21:54.836 [2024-11-19 17:39:56.732306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.836 [2024-11-19 17:39:56.732312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.836 [2024-11-19 17:39:56.732315] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732319] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1483690): datao=0, datal=1024, cccid=4 00:21:54.836 [2024-11-19 17:39:56.732323] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e5700) on tqpair(0x1483690): expected_datao=0, payload_size=1024 00:21:54.836 [2024-11-19 17:39:56.732327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732333] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732336] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.836 [2024-11-19 17:39:56.732346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.836 [2024-11-19 17:39:56.732349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.732353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5880) on tqpair=0x1483690 00:21:54.836 [2024-11-19 17:39:56.777958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.836 [2024-11-19 17:39:56.777968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.836 [2024-11-19 17:39:56.777971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.777975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5700) on tqpair=0x1483690 00:21:54.836 [2024-11-19 17:39:56.777986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.777990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1483690) 00:21:54.836 [2024-11-19 17:39:56.777997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.836 [2024-11-19 17:39:56.778013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5700, cid 4, qid 0 00:21:54.836 [2024-11-19 17:39:56.778183] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.836 [2024-11-19 17:39:56.778189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.836 [2024-11-19 17:39:56.778192] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778198] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1483690): datao=0, datal=3072, cccid=4 00:21:54.836 [2024-11-19 17:39:56.778203] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e5700) on tqpair(0x1483690): expected_datao=0, payload_size=3072 00:21:54.836 [2024-11-19 17:39:56.778207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778213] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778216] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.836 [2024-11-19 17:39:56.778238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.836 [2024-11-19 17:39:56.778241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5700) on tqpair=0x1483690 00:21:54.836 [2024-11-19 17:39:56.778252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1483690) 00:21:54.836 [2024-11-19 17:39:56.778262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.836 [2024-11-19 17:39:56.778275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5700, cid 4, qid 0 00:21:54.836 [2024-11-19 
17:39:56.778347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.836 [2024-11-19 17:39:56.778353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.836 [2024-11-19 17:39:56.778356] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778359] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1483690): datao=0, datal=8, cccid=4 00:21:54.836 [2024-11-19 17:39:56.778363] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e5700) on tqpair(0x1483690): expected_datao=0, payload_size=8 00:21:54.836 [2024-11-19 17:39:56.778366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778372] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.778375] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.820086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.836 [2024-11-19 17:39:56.820095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.836 [2024-11-19 17:39:56.820098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.836 [2024-11-19 17:39:56.820102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5700) on tqpair=0x1483690 00:21:54.836 ===================================================== 00:21:54.836 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:54.836 ===================================================== 00:21:54.836 Controller Capabilities/Features 00:21:54.836 ================================ 00:21:54.836 Vendor ID: 0000 00:21:54.836 Subsystem Vendor ID: 0000 00:21:54.836 Serial Number: .................... 00:21:54.836 Model Number: ........................................ 
00:21:54.836 Firmware Version: 25.01 00:21:54.836 Recommended Arb Burst: 0 00:21:54.836 IEEE OUI Identifier: 00 00 00 00:21:54.836 Multi-path I/O 00:21:54.836 May have multiple subsystem ports: No 00:21:54.836 May have multiple controllers: No 00:21:54.836 Associated with SR-IOV VF: No 00:21:54.836 Max Data Transfer Size: 131072 00:21:54.836 Max Number of Namespaces: 0 00:21:54.836 Max Number of I/O Queues: 1024 00:21:54.836 NVMe Specification Version (VS): 1.3 00:21:54.836 NVMe Specification Version (Identify): 1.3 00:21:54.836 Maximum Queue Entries: 128 00:21:54.836 Contiguous Queues Required: Yes 00:21:54.836 Arbitration Mechanisms Supported 00:21:54.836 Weighted Round Robin: Not Supported 00:21:54.836 Vendor Specific: Not Supported 00:21:54.836 Reset Timeout: 15000 ms 00:21:54.836 Doorbell Stride: 4 bytes 00:21:54.836 NVM Subsystem Reset: Not Supported 00:21:54.836 Command Sets Supported 00:21:54.836 NVM Command Set: Supported 00:21:54.836 Boot Partition: Not Supported 00:21:54.836 Memory Page Size Minimum: 4096 bytes 00:21:54.836 Memory Page Size Maximum: 4096 bytes 00:21:54.836 Persistent Memory Region: Not Supported 00:21:54.836 Optional Asynchronous Events Supported 00:21:54.836 Namespace Attribute Notices: Not Supported 00:21:54.836 Firmware Activation Notices: Not Supported 00:21:54.836 ANA Change Notices: Not Supported 00:21:54.836 PLE Aggregate Log Change Notices: Not Supported 00:21:54.836 LBA Status Info Alert Notices: Not Supported 00:21:54.836 EGE Aggregate Log Change Notices: Not Supported 00:21:54.836 Normal NVM Subsystem Shutdown event: Not Supported 00:21:54.836 Zone Descriptor Change Notices: Not Supported 00:21:54.836 Discovery Log Change Notices: Supported 00:21:54.836 Controller Attributes 00:21:54.836 128-bit Host Identifier: Not Supported 00:21:54.836 Non-Operational Permissive Mode: Not Supported 00:21:54.836 NVM Sets: Not Supported 00:21:54.836 Read Recovery Levels: Not Supported 00:21:54.836 Endurance Groups: Not Supported 00:21:54.836 
Predictable Latency Mode: Not Supported 00:21:54.836 Traffic Based Keep ALive: Not Supported 00:21:54.837 Namespace Granularity: Not Supported 00:21:54.837 SQ Associations: Not Supported 00:21:54.837 UUID List: Not Supported 00:21:54.837 Multi-Domain Subsystem: Not Supported 00:21:54.837 Fixed Capacity Management: Not Supported 00:21:54.837 Variable Capacity Management: Not Supported 00:21:54.837 Delete Endurance Group: Not Supported 00:21:54.837 Delete NVM Set: Not Supported 00:21:54.837 Extended LBA Formats Supported: Not Supported 00:21:54.837 Flexible Data Placement Supported: Not Supported 00:21:54.837 00:21:54.837 Controller Memory Buffer Support 00:21:54.837 ================================ 00:21:54.837 Supported: No 00:21:54.837 00:21:54.837 Persistent Memory Region Support 00:21:54.837 ================================ 00:21:54.837 Supported: No 00:21:54.837 00:21:54.837 Admin Command Set Attributes 00:21:54.837 ============================ 00:21:54.837 Security Send/Receive: Not Supported 00:21:54.837 Format NVM: Not Supported 00:21:54.837 Firmware Activate/Download: Not Supported 00:21:54.837 Namespace Management: Not Supported 00:21:54.837 Device Self-Test: Not Supported 00:21:54.837 Directives: Not Supported 00:21:54.837 NVMe-MI: Not Supported 00:21:54.837 Virtualization Management: Not Supported 00:21:54.837 Doorbell Buffer Config: Not Supported 00:21:54.837 Get LBA Status Capability: Not Supported 00:21:54.837 Command & Feature Lockdown Capability: Not Supported 00:21:54.837 Abort Command Limit: 1 00:21:54.837 Async Event Request Limit: 4 00:21:54.837 Number of Firmware Slots: N/A 00:21:54.837 Firmware Slot 1 Read-Only: N/A 00:21:54.837 Firmware Activation Without Reset: N/A 00:21:54.837 Multiple Update Detection Support: N/A 00:21:54.837 Firmware Update Granularity: No Information Provided 00:21:54.837 Per-Namespace SMART Log: No 00:21:54.837 Asymmetric Namespace Access Log Page: Not Supported 00:21:54.837 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:54.837 Command Effects Log Page: Not Supported 00:21:54.837 Get Log Page Extended Data: Supported 00:21:54.837 Telemetry Log Pages: Not Supported 00:21:54.837 Persistent Event Log Pages: Not Supported 00:21:54.837 Supported Log Pages Log Page: May Support 00:21:54.837 Commands Supported & Effects Log Page: Not Supported 00:21:54.837 Feature Identifiers & Effects Log Page: May Support 00:21:54.837 NVMe-MI Commands & Effects Log Page: May Support 00:21:54.837 Data Area 4 for Telemetry Log: Not Supported 00:21:54.837 Error Log Page Entries Supported: 128 00:21:54.837 Keep Alive: Not Supported 00:21:54.837 00:21:54.837 NVM Command Set Attributes 00:21:54.837 ========================== 00:21:54.837 Submission Queue Entry Size 00:21:54.837 Max: 1 00:21:54.837 Min: 1 00:21:54.837 Completion Queue Entry Size 00:21:54.837 Max: 1 00:21:54.837 Min: 1 00:21:54.837 Number of Namespaces: 0 00:21:54.837 Compare Command: Not Supported 00:21:54.837 Write Uncorrectable Command: Not Supported 00:21:54.837 Dataset Management Command: Not Supported 00:21:54.837 Write Zeroes Command: Not Supported 00:21:54.837 Set Features Save Field: Not Supported 00:21:54.837 Reservations: Not Supported 00:21:54.837 Timestamp: Not Supported 00:21:54.837 Copy: Not Supported 00:21:54.837 Volatile Write Cache: Not Present 00:21:54.837 Atomic Write Unit (Normal): 1 00:21:54.837 Atomic Write Unit (PFail): 1 00:21:54.837 Atomic Compare & Write Unit: 1 00:21:54.837 Fused Compare & Write: Supported 00:21:54.837 Scatter-Gather List 00:21:54.837 SGL Command Set: Supported 00:21:54.837 SGL Keyed: Supported 00:21:54.837 SGL Bit Bucket Descriptor: Not Supported 00:21:54.837 SGL Metadata Pointer: Not Supported 00:21:54.837 Oversized SGL: Not Supported 00:21:54.837 SGL Metadata Address: Not Supported 00:21:54.837 SGL Offset: Supported 00:21:54.837 Transport SGL Data Block: Not Supported 00:21:54.837 Replay Protected Memory Block: Not Supported 00:21:54.837 00:21:54.837 
Firmware Slot Information 00:21:54.837 ========================= 00:21:54.837 Active slot: 0 00:21:54.837 00:21:54.837 00:21:54.837 Error Log 00:21:54.837 ========= 00:21:54.837 00:21:54.837 Active Namespaces 00:21:54.837 ================= 00:21:54.837 Discovery Log Page 00:21:54.837 ================== 00:21:54.837 Generation Counter: 2 00:21:54.837 Number of Records: 2 00:21:54.837 Record Format: 0 00:21:54.837 00:21:54.837 Discovery Log Entry 0 00:21:54.837 ---------------------- 00:21:54.837 Transport Type: 3 (TCP) 00:21:54.837 Address Family: 1 (IPv4) 00:21:54.837 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:54.837 Entry Flags: 00:21:54.837 Duplicate Returned Information: 1 00:21:54.837 Explicit Persistent Connection Support for Discovery: 1 00:21:54.837 Transport Requirements: 00:21:54.837 Secure Channel: Not Required 00:21:54.837 Port ID: 0 (0x0000) 00:21:54.837 Controller ID: 65535 (0xffff) 00:21:54.837 Admin Max SQ Size: 128 00:21:54.837 Transport Service Identifier: 4420 00:21:54.837 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:54.837 Transport Address: 10.0.0.2 00:21:54.837 Discovery Log Entry 1 00:21:54.837 ---------------------- 00:21:54.837 Transport Type: 3 (TCP) 00:21:54.837 Address Family: 1 (IPv4) 00:21:54.837 Subsystem Type: 2 (NVM Subsystem) 00:21:54.837 Entry Flags: 00:21:54.837 Duplicate Returned Information: 0 00:21:54.837 Explicit Persistent Connection Support for Discovery: 0 00:21:54.837 Transport Requirements: 00:21:54.837 Secure Channel: Not Required 00:21:54.837 Port ID: 0 (0x0000) 00:21:54.837 Controller ID: 65535 (0xffff) 00:21:54.837 Admin Max SQ Size: 128 00:21:54.837 Transport Service Identifier: 4420 00:21:54.837 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:54.837 Transport Address: 10.0.0.2 [2024-11-19 17:39:56.820188] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:54.837 [2024-11-19 
17:39:56.820199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5100) on tqpair=0x1483690 00:21:54.837 [2024-11-19 17:39:56.820205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.837 [2024-11-19 17:39:56.820210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5280) on tqpair=0x1483690 00:21:54.837 [2024-11-19 17:39:56.820214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.837 [2024-11-19 17:39:56.820218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5400) on tqpair=0x1483690 00:21:54.837 [2024-11-19 17:39:56.820222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.837 [2024-11-19 17:39:56.820227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.837 [2024-11-19 17:39:56.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.837 [2024-11-19 17:39:56.820243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.837 [2024-11-19 17:39:56.820247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.837 [2024-11-19 17:39:56.820250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.820257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.820270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.820332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 
17:39:56.820338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.820341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.820350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.820362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.820375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.820449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.820455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.820458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.820465] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:54.838 [2024-11-19 17:39:56.820470] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:54.838 [2024-11-19 17:39:56.820477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 
[2024-11-19 17:39:56.820484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.820490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.820499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.820561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.820566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.820569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.820581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.820593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.820603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.820682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.820687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.820692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on 
tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.820705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.820717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.820726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.820791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.820796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.820799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.820811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.820823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.820833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.820896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.820902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:54.838 [2024-11-19 17:39:56.820905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.820917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.820924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.820929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.820939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.821002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.821009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.821012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.821023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.821035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.821045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.821113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.821119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.821122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.821135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.821148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.821157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.821221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.821227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.821230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.821242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.821254] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.821264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.821322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.821327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.821330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.821342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.821354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.821363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.821428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.821434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.838 [2024-11-19 17:39:56.821437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.838 [2024-11-19 17:39:56.821448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821452] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.838 [2024-11-19 17:39:56.821455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.838 [2024-11-19 17:39:56.821461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.838 [2024-11-19 17:39:56.821470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.838 [2024-11-19 17:39:56.821533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.838 [2024-11-19 17:39:56.821539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.821542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.839 [2024-11-19 17:39:56.821556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.839 [2024-11-19 17:39:56.821568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.839 [2024-11-19 17:39:56.821578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.839 [2024-11-19 17:39:56.821636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 17:39:56.821641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.821644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821647] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.839 [2024-11-19 17:39:56.821656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.839 [2024-11-19 17:39:56.821668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.839 [2024-11-19 17:39:56.821677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.839 [2024-11-19 17:39:56.821744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 17:39:56.821749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.821753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.839 [2024-11-19 17:39:56.821764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.839 [2024-11-19 17:39:56.821777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.839 [2024-11-19 17:39:56.821786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.839 [2024-11-19 17:39:56.821845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 
17:39:56.821851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.821854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.839 [2024-11-19 17:39:56.821865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.821871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.839 [2024-11-19 17:39:56.821877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.839 [2024-11-19 17:39:56.821886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.839 [2024-11-19 17:39:56.825957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 17:39:56.825965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.825968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.825971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.839 [2024-11-19 17:39:56.825981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.825987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.825990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1483690) 00:21:54.839 [2024-11-19 17:39:56.825996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.839 [2024-11-19 
17:39:56.826008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e5580, cid 3, qid 0 00:21:54.839 [2024-11-19 17:39:56.826158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 17:39:56.826164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.826167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.826170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e5580) on tqpair=0x1483690 00:21:54.839 [2024-11-19 17:39:56.826177] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:21:54.839 00:21:54.839 17:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:54.839 [2024-11-19 17:39:56.863903] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:21:54.839 [2024-11-19 17:39:56.863938] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537110 ] 00:21:54.839 [2024-11-19 17:39:56.903928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:54.839 [2024-11-19 17:39:56.907976] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:54.839 [2024-11-19 17:39:56.907983] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:54.839 [2024-11-19 17:39:56.907993] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:54.839 [2024-11-19 17:39:56.908001] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:54.839 [2024-11-19 17:39:56.908394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:54.839 [2024-11-19 17:39:56.908420] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fd2690 0 00:21:54.839 [2024-11-19 17:39:56.914963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:54.839 [2024-11-19 17:39:56.914976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:54.839 [2024-11-19 17:39:56.914981] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:54.839 [2024-11-19 17:39:56.914984] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:54.839 [2024-11-19 17:39:56.915008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.915013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.915017] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.839 [2024-11-19 17:39:56.915027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:54.839 [2024-11-19 17:39:56.915042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.839 [2024-11-19 17:39:56.922956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 17:39:56.922964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.922968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.922974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.839 [2024-11-19 17:39:56.922986] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:54.839 [2024-11-19 17:39:56.922992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:54.839 [2024-11-19 17:39:56.922997] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:54.839 [2024-11-19 17:39:56.923007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.839 [2024-11-19 17:39:56.923022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.839 [2024-11-19 17:39:56.923034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.839 [2024-11-19 17:39:56.923135] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 17:39:56.923140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.923143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.839 [2024-11-19 17:39:56.923151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:54.839 [2024-11-19 17:39:56.923159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:54.839 [2024-11-19 17:39:56.923165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.839 [2024-11-19 17:39:56.923178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.839 [2024-11-19 17:39:56.923188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.839 [2024-11-19 17:39:56.923255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.839 [2024-11-19 17:39:56.923261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.839 [2024-11-19 17:39:56.923264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.839 [2024-11-19 17:39:56.923272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:21:54.839 [2024-11-19 17:39:56.923278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:54.839 [2024-11-19 17:39:56.923284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.839 [2024-11-19 17:39:56.923291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.839 [2024-11-19 17:39:56.923297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.840 [2024-11-19 17:39:56.923307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.840 [2024-11-19 17:39:56.923368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.840 [2024-11-19 17:39:56.923374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.840 [2024-11-19 17:39:56.923377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.840 [2024-11-19 17:39:56.923387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:54.840 [2024-11-19 17:39:56.923395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.923408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.840 [2024-11-19 17:39:56.923418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.840 [2024-11-19 17:39:56.923482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.840 [2024-11-19 17:39:56.923488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.840 [2024-11-19 17:39:56.923491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.840 [2024-11-19 17:39:56.923497] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:54.840 [2024-11-19 17:39:56.923502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:54.840 [2024-11-19 17:39:56.923509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:54.840 [2024-11-19 17:39:56.923616] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:54.840 [2024-11-19 17:39:56.923620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:54.840 [2024-11-19 17:39:56.923627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.923639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.840 [2024-11-19 17:39:56.923649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.840 [2024-11-19 17:39:56.923714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.840 [2024-11-19 17:39:56.923720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.840 [2024-11-19 17:39:56.923723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.840 [2024-11-19 17:39:56.923730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:54.840 [2024-11-19 17:39:56.923738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.923751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.840 [2024-11-19 17:39:56.923760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.840 [2024-11-19 17:39:56.923824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.840 [2024-11-19 17:39:56.923830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.840 [2024-11-19 17:39:56.923833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.840 [2024-11-19 17:39:56.923842] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:54.840 [2024-11-19 17:39:56.923846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:54.840 [2024-11-19 17:39:56.923853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:54.840 [2024-11-19 17:39:56.923862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:54.840 [2024-11-19 17:39:56.923869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.923878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.840 [2024-11-19 17:39:56.923888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.840 [2024-11-19 17:39:56.923982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.840 [2024-11-19 17:39:56.923988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.840 [2024-11-19 17:39:56.923991] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.923994] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=4096, cccid=0 00:21:54.840 [2024-11-19 17:39:56.923998] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2034100) on tqpair(0x1fd2690): expected_datao=0, payload_size=4096 00:21:54.840 [2024-11-19 17:39:56.924002] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924019] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924023] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.840 [2024-11-19 17:39:56.924065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.840 [2024-11-19 17:39:56.924068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.840 [2024-11-19 17:39:56.924078] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:54.840 [2024-11-19 17:39:56.924082] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:54.840 [2024-11-19 17:39:56.924086] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:54.840 [2024-11-19 17:39:56.924092] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:54.840 [2024-11-19 17:39:56.924096] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:54.840 [2024-11-19 17:39:56.924100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:54.840 [2024-11-19 17:39:56.924110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:54.840 [2024-11-19 17:39:56.924116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924119] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.924129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.840 [2024-11-19 17:39:56.924140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034100, cid 0, qid 0 00:21:54.840 [2024-11-19 17:39:56.924204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.840 [2024-11-19 17:39:56.924210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.840 [2024-11-19 17:39:56.924214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:54.840 [2024-11-19 17:39:56.924222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.924234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.840 [2024-11-19 17:39:56.924239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.924251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:54.840 [2024-11-19 17:39:56.924256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.924267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.840 [2024-11-19 17:39:56.924272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.924283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.840 [2024-11-19 17:39:56.924288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:54.840 [2024-11-19 17:39:56.924296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:54.840 [2024-11-19 17:39:56.924301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.840 [2024-11-19 17:39:56.924305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2690) 00:21:54.840 [2024-11-19 17:39:56.924311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.840 [2024-11-19 17:39:56.924321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x2034100, cid 0, qid 0 00:21:54.841 [2024-11-19 17:39:56.924326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034280, cid 1, qid 0 00:21:54.841 [2024-11-19 17:39:56.924330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034400, cid 2, qid 0 00:21:54.841 [2024-11-19 17:39:56.924334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034580, cid 3, qid 0 00:21:54.841 [2024-11-19 17:39:56.924338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034700, cid 4, qid 0 00:21:54.841 [2024-11-19 17:39:56.924441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.841 [2024-11-19 17:39:56.924447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.841 [2024-11-19 17:39:56.924450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.924455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034700) on tqpair=0x1fd2690 00:21:54.841 [2024-11-19 17:39:56.924461] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:54.841 [2024-11-19 17:39:56.924466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:56.924473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:56.924479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:56.924484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.924488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.841 [2024-11-19 
17:39:56.924491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2690) 00:21:54.841 [2024-11-19 17:39:56.924496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.841 [2024-11-19 17:39:56.924506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034700, cid 4, qid 0 00:21:54.841 [2024-11-19 17:39:56.924567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.841 [2024-11-19 17:39:56.924573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.841 [2024-11-19 17:39:56.924576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.924580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034700) on tqpair=0x1fd2690 00:21:54.841 [2024-11-19 17:39:56.924632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:56.924642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:56.924649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.924652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2690) 00:21:54.841 [2024-11-19 17:39:56.924658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.841 [2024-11-19 17:39:56.924668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034700, cid 4, qid 0 00:21:54.841 [2024-11-19 17:39:56.924745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.841 [2024-11-19 17:39:56.924751] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.841 [2024-11-19 17:39:56.924754] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.924757] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=4096, cccid=4 00:21:54.841 [2024-11-19 17:39:56.924761] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2034700) on tqpair(0x1fd2690): expected_datao=0, payload_size=4096 00:21:54.841 [2024-11-19 17:39:56.924765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.924776] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.924779] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.967956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.841 [2024-11-19 17:39:56.967968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.841 [2024-11-19 17:39:56.967972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.967975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034700) on tqpair=0x1fd2690 00:21:54.841 [2024-11-19 17:39:56.967987] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:54.841 [2024-11-19 17:39:56.967999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:56.968009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:56.968016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.968019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1fd2690) 00:21:54.841 [2024-11-19 17:39:56.968027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.841 [2024-11-19 17:39:56.968039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034700, cid 4, qid 0 00:21:54.841 [2024-11-19 17:39:56.968209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.841 [2024-11-19 17:39:56.968215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.841 [2024-11-19 17:39:56.968218] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.968221] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=4096, cccid=4 00:21:54.841 [2024-11-19 17:39:56.968225] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2034700) on tqpair(0x1fd2690): expected_datao=0, payload_size=4096 00:21:54.841 [2024-11-19 17:39:56.968229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.968242] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:56.968247] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:57.009014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.841 [2024-11-19 17:39:57.009025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.841 [2024-11-19 17:39:57.009028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:57.009031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034700) on tqpair=0x1fd2690 00:21:54.841 [2024-11-19 17:39:57.009046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:54.841 
[2024-11-19 17:39:57.009055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:54.841 [2024-11-19 17:39:57.009062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:57.009066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2690) 00:21:54.841 [2024-11-19 17:39:57.009072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.841 [2024-11-19 17:39:57.009084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034700, cid 4, qid 0 00:21:54.841 [2024-11-19 17:39:57.009154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.841 [2024-11-19 17:39:57.009160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.841 [2024-11-19 17:39:57.009163] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:57.009166] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=4096, cccid=4 00:21:54.841 [2024-11-19 17:39:57.009170] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2034700) on tqpair(0x1fd2690): expected_datao=0, payload_size=4096 00:21:54.841 [2024-11-19 17:39:57.009174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:57.009186] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.841 [2024-11-19 17:39:57.009191] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:55.102 [2024-11-19 17:39:57.051027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.102 [2024-11-19 17:39:57.051038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.102 [2024-11-19 17:39:57.051043] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.102 [2024-11-19 17:39:57.051047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034700) on tqpair=0x1fd2690 00:21:55.102 [2024-11-19 17:39:57.051055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:55.102 [2024-11-19 17:39:57.051062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:55.102 [2024-11-19 17:39:57.051071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:55.102 [2024-11-19 17:39:57.051077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:55.102 [2024-11-19 17:39:57.051082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:55.102 [2024-11-19 17:39:57.051086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:55.102 [2024-11-19 17:39:57.051091] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:55.102 [2024-11-19 17:39:57.051095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:55.102 [2024-11-19 17:39:57.051100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:55.102 [2024-11-19 17:39:57.051114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.102 [2024-11-19 17:39:57.051118] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2690) 00:21:55.102 [2024-11-19 17:39:57.051124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.103 [2024-11-19 17:39:57.051130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.103 [2024-11-19 17:39:57.051155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034700, cid 4, qid 0 00:21:55.103 [2024-11-19 17:39:57.051160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034880, cid 5, qid 0 00:21:55.103 [2024-11-19 17:39:57.051239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.051244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.051247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034700) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 17:39:57.051257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.051262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.051265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034880) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 
17:39:57.051276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.103 [2024-11-19 17:39:57.051297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034880, cid 5, qid 0 00:21:55.103 [2024-11-19 17:39:57.051383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.051389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.051392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034880) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 17:39:57.051404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.103 [2024-11-19 17:39:57.051422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034880, cid 5, qid 0 00:21:55.103 [2024-11-19 17:39:57.051491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.051497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.051500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2034880) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 17:39:57.051512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.103 [2024-11-19 17:39:57.051531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034880, cid 5, qid 0 00:21:55.103 [2024-11-19 17:39:57.051601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.051607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.051610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034880) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 17:39:57.051626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.103 [2024-11-19 17:39:57.051641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:55.103 [2024-11-19 17:39:57.051656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.103 [2024-11-19 17:39:57.051671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd2690) 00:21:55.103 [2024-11-19 17:39:57.051679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.103 [2024-11-19 17:39:57.051690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034880, cid 5, qid 0 00:21:55.103 [2024-11-19 17:39:57.051699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034700, cid 4, qid 0 00:21:55.103 [2024-11-19 17:39:57.051704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034a00, cid 6, qid 0 00:21:55.103 [2024-11-19 17:39:57.051708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034b80, cid 7, qid 0 00:21:55.103 [2024-11-19 17:39:57.051840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:55.103 [2024-11-19 17:39:57.051846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:55.103 [2024-11-19 17:39:57.051849] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051852] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=8192, cccid=5 00:21:55.103 [2024-11-19 17:39:57.051856] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2034880) on tqpair(0x1fd2690): expected_datao=0, payload_size=8192 00:21:55.103 [2024-11-19 17:39:57.051860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051897] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051901] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:55.103 [2024-11-19 17:39:57.051911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:55.103 [2024-11-19 17:39:57.051913] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051916] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=512, cccid=4 00:21:55.103 [2024-11-19 17:39:57.051920] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2034700) on tqpair(0x1fd2690): expected_datao=0, payload_size=512 00:21:55.103 [2024-11-19 17:39:57.051924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051929] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051933] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.051937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:55.103 [2024-11-19 17:39:57.051942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:55.103 [2024-11-19 17:39:57.051945] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.055955] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=512, cccid=6 00:21:55.103 [2024-11-19 17:39:57.055959] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x2034a00) on tqpair(0x1fd2690): expected_datao=0, payload_size=512 00:21:55.103 [2024-11-19 17:39:57.055962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.055968] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.055971] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.055976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:55.103 [2024-11-19 17:39:57.055980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:55.103 [2024-11-19 17:39:57.055983] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.055987] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2690): datao=0, datal=4096, cccid=7 00:21:55.103 [2024-11-19 17:39:57.055990] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2034b80) on tqpair(0x1fd2690): expected_datao=0, payload_size=4096 00:21:55.103 [2024-11-19 17:39:57.055994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.056000] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.056003] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.056010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.056015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.056018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.056024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034880) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 17:39:57.056034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.056039] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.056042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.056046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034700) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 17:39:57.056054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.056059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.056062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.056065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034a00) on tqpair=0x1fd2690 00:21:55.103 [2024-11-19 17:39:57.056071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.103 [2024-11-19 17:39:57.056076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.103 [2024-11-19 17:39:57.056079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.103 [2024-11-19 17:39:57.056082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034b80) on tqpair=0x1fd2690 00:21:55.103 ===================================================== 00:21:55.103 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:55.103 ===================================================== 00:21:55.103 Controller Capabilities/Features 00:21:55.103 ================================ 00:21:55.104 Vendor ID: 8086 00:21:55.104 Subsystem Vendor ID: 8086 00:21:55.104 Serial Number: SPDK00000000000001 00:21:55.104 Model Number: SPDK bdev Controller 00:21:55.104 Firmware Version: 25.01 00:21:55.104 Recommended Arb Burst: 6 00:21:55.104 IEEE OUI Identifier: e4 d2 5c 00:21:55.104 Multi-path I/O 00:21:55.104 May have multiple subsystem ports: Yes 00:21:55.104 May have multiple controllers: Yes 00:21:55.104 Associated with SR-IOV VF: No 
00:21:55.104 Max Data Transfer Size: 131072 00:21:55.104 Max Number of Namespaces: 32 00:21:55.104 Max Number of I/O Queues: 127 00:21:55.104 NVMe Specification Version (VS): 1.3 00:21:55.104 NVMe Specification Version (Identify): 1.3 00:21:55.104 Maximum Queue Entries: 128 00:21:55.104 Contiguous Queues Required: Yes 00:21:55.104 Arbitration Mechanisms Supported 00:21:55.104 Weighted Round Robin: Not Supported 00:21:55.104 Vendor Specific: Not Supported 00:21:55.104 Reset Timeout: 15000 ms 00:21:55.104 Doorbell Stride: 4 bytes 00:21:55.104 NVM Subsystem Reset: Not Supported 00:21:55.104 Command Sets Supported 00:21:55.104 NVM Command Set: Supported 00:21:55.104 Boot Partition: Not Supported 00:21:55.104 Memory Page Size Minimum: 4096 bytes 00:21:55.104 Memory Page Size Maximum: 4096 bytes 00:21:55.104 Persistent Memory Region: Not Supported 00:21:55.104 Optional Asynchronous Events Supported 00:21:55.104 Namespace Attribute Notices: Supported 00:21:55.104 Firmware Activation Notices: Not Supported 00:21:55.104 ANA Change Notices: Not Supported 00:21:55.104 PLE Aggregate Log Change Notices: Not Supported 00:21:55.104 LBA Status Info Alert Notices: Not Supported 00:21:55.104 EGE Aggregate Log Change Notices: Not Supported 00:21:55.104 Normal NVM Subsystem Shutdown event: Not Supported 00:21:55.104 Zone Descriptor Change Notices: Not Supported 00:21:55.104 Discovery Log Change Notices: Not Supported 00:21:55.104 Controller Attributes 00:21:55.104 128-bit Host Identifier: Supported 00:21:55.104 Non-Operational Permissive Mode: Not Supported 00:21:55.104 NVM Sets: Not Supported 00:21:55.104 Read Recovery Levels: Not Supported 00:21:55.104 Endurance Groups: Not Supported 00:21:55.104 Predictable Latency Mode: Not Supported 00:21:55.104 Traffic Based Keep ALive: Not Supported 00:21:55.104 Namespace Granularity: Not Supported 00:21:55.104 SQ Associations: Not Supported 00:21:55.104 UUID List: Not Supported 00:21:55.104 Multi-Domain Subsystem: Not Supported 00:21:55.104 
Fixed Capacity Management: Not Supported 00:21:55.104 Variable Capacity Management: Not Supported 00:21:55.104 Delete Endurance Group: Not Supported 00:21:55.104 Delete NVM Set: Not Supported 00:21:55.104 Extended LBA Formats Supported: Not Supported 00:21:55.104 Flexible Data Placement Supported: Not Supported 00:21:55.104 00:21:55.104 Controller Memory Buffer Support 00:21:55.104 ================================ 00:21:55.104 Supported: No 00:21:55.104 00:21:55.104 Persistent Memory Region Support 00:21:55.104 ================================ 00:21:55.104 Supported: No 00:21:55.104 00:21:55.104 Admin Command Set Attributes 00:21:55.104 ============================ 00:21:55.104 Security Send/Receive: Not Supported 00:21:55.104 Format NVM: Not Supported 00:21:55.104 Firmware Activate/Download: Not Supported 00:21:55.104 Namespace Management: Not Supported 00:21:55.104 Device Self-Test: Not Supported 00:21:55.104 Directives: Not Supported 00:21:55.104 NVMe-MI: Not Supported 00:21:55.104 Virtualization Management: Not Supported 00:21:55.104 Doorbell Buffer Config: Not Supported 00:21:55.104 Get LBA Status Capability: Not Supported 00:21:55.104 Command & Feature Lockdown Capability: Not Supported 00:21:55.104 Abort Command Limit: 4 00:21:55.104 Async Event Request Limit: 4 00:21:55.104 Number of Firmware Slots: N/A 00:21:55.104 Firmware Slot 1 Read-Only: N/A 00:21:55.104 Firmware Activation Without Reset: N/A 00:21:55.104 Multiple Update Detection Support: N/A 00:21:55.104 Firmware Update Granularity: No Information Provided 00:21:55.104 Per-Namespace SMART Log: No 00:21:55.104 Asymmetric Namespace Access Log Page: Not Supported 00:21:55.104 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:55.104 Command Effects Log Page: Supported 00:21:55.104 Get Log Page Extended Data: Supported 00:21:55.104 Telemetry Log Pages: Not Supported 00:21:55.104 Persistent Event Log Pages: Not Supported 00:21:55.104 Supported Log Pages Log Page: May Support 00:21:55.104 Commands Supported & 
Effects Log Page: Not Supported 00:21:55.104 Feature Identifiers & Effects Log Page:May Support 00:21:55.104 NVMe-MI Commands & Effects Log Page: May Support 00:21:55.104 Data Area 4 for Telemetry Log: Not Supported 00:21:55.104 Error Log Page Entries Supported: 128 00:21:55.104 Keep Alive: Supported 00:21:55.104 Keep Alive Granularity: 10000 ms 00:21:55.104 00:21:55.104 NVM Command Set Attributes 00:21:55.104 ========================== 00:21:55.104 Submission Queue Entry Size 00:21:55.104 Max: 64 00:21:55.104 Min: 64 00:21:55.104 Completion Queue Entry Size 00:21:55.104 Max: 16 00:21:55.104 Min: 16 00:21:55.104 Number of Namespaces: 32 00:21:55.104 Compare Command: Supported 00:21:55.104 Write Uncorrectable Command: Not Supported 00:21:55.104 Dataset Management Command: Supported 00:21:55.104 Write Zeroes Command: Supported 00:21:55.104 Set Features Save Field: Not Supported 00:21:55.104 Reservations: Supported 00:21:55.104 Timestamp: Not Supported 00:21:55.104 Copy: Supported 00:21:55.104 Volatile Write Cache: Present 00:21:55.104 Atomic Write Unit (Normal): 1 00:21:55.104 Atomic Write Unit (PFail): 1 00:21:55.104 Atomic Compare & Write Unit: 1 00:21:55.104 Fused Compare & Write: Supported 00:21:55.104 Scatter-Gather List 00:21:55.104 SGL Command Set: Supported 00:21:55.104 SGL Keyed: Supported 00:21:55.104 SGL Bit Bucket Descriptor: Not Supported 00:21:55.104 SGL Metadata Pointer: Not Supported 00:21:55.104 Oversized SGL: Not Supported 00:21:55.104 SGL Metadata Address: Not Supported 00:21:55.104 SGL Offset: Supported 00:21:55.104 Transport SGL Data Block: Not Supported 00:21:55.104 Replay Protected Memory Block: Not Supported 00:21:55.104 00:21:55.104 Firmware Slot Information 00:21:55.104 ========================= 00:21:55.104 Active slot: 1 00:21:55.104 Slot 1 Firmware Revision: 25.01 00:21:55.104 00:21:55.104 00:21:55.104 Commands Supported and Effects 00:21:55.104 ============================== 00:21:55.104 Admin Commands 00:21:55.104 -------------- 
00:21:55.104 Get Log Page (02h): Supported 00:21:55.104 Identify (06h): Supported 00:21:55.104 Abort (08h): Supported 00:21:55.104 Set Features (09h): Supported 00:21:55.104 Get Features (0Ah): Supported 00:21:55.104 Asynchronous Event Request (0Ch): Supported 00:21:55.104 Keep Alive (18h): Supported 00:21:55.104 I/O Commands 00:21:55.104 ------------ 00:21:55.104 Flush (00h): Supported LBA-Change 00:21:55.104 Write (01h): Supported LBA-Change 00:21:55.104 Read (02h): Supported 00:21:55.104 Compare (05h): Supported 00:21:55.104 Write Zeroes (08h): Supported LBA-Change 00:21:55.104 Dataset Management (09h): Supported LBA-Change 00:21:55.104 Copy (19h): Supported LBA-Change 00:21:55.104 00:21:55.104 Error Log 00:21:55.104 ========= 00:21:55.104 00:21:55.104 Arbitration 00:21:55.104 =========== 00:21:55.104 Arbitration Burst: 1 00:21:55.104 00:21:55.104 Power Management 00:21:55.104 ================ 00:21:55.104 Number of Power States: 1 00:21:55.104 Current Power State: Power State #0 00:21:55.104 Power State #0: 00:21:55.104 Max Power: 0.00 W 00:21:55.104 Non-Operational State: Operational 00:21:55.104 Entry Latency: Not Reported 00:21:55.104 Exit Latency: Not Reported 00:21:55.104 Relative Read Throughput: 0 00:21:55.104 Relative Read Latency: 0 00:21:55.104 Relative Write Throughput: 0 00:21:55.104 Relative Write Latency: 0 00:21:55.104 Idle Power: Not Reported 00:21:55.104 Active Power: Not Reported 00:21:55.104 Non-Operational Permissive Mode: Not Supported 00:21:55.104 00:21:55.104 Health Information 00:21:55.104 ================== 00:21:55.104 Critical Warnings: 00:21:55.104 Available Spare Space: OK 00:21:55.104 Temperature: OK 00:21:55.104 Device Reliability: OK 00:21:55.104 Read Only: No 00:21:55.104 Volatile Memory Backup: OK 00:21:55.104 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:55.104 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:55.104 Available Spare: 0% 00:21:55.104 Available Spare Threshold: 0% 00:21:55.105 Life Percentage 
Used:[2024-11-19 17:39:57.056166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd2690) 00:21:55.105 [2024-11-19 17:39:57.056178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.105 [2024-11-19 17:39:57.056190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034b80, cid 7, qid 0 00:21:55.105 [2024-11-19 17:39:57.056270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.105 [2024-11-19 17:39:57.056276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.105 [2024-11-19 17:39:57.056279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034b80) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056310] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:55.105 [2024-11-19 17:39:57.056320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034100) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.105 [2024-11-19 17:39:57.056330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034280) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.105 [2024-11-19 17:39:57.056338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034400) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.105 [2024-11-19 17:39:57.056346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034580) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.105 [2024-11-19 17:39:57.056357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2690) 00:21:55.105 [2024-11-19 17:39:57.056370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.105 [2024-11-19 17:39:57.056384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034580, cid 3, qid 0 00:21:55.105 [2024-11-19 17:39:57.056448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.105 [2024-11-19 17:39:57.056454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.105 [2024-11-19 17:39:57.056457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034580) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2690) 00:21:55.105 [2024-11-19 17:39:57.056478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.105 [2024-11-19 17:39:57.056491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034580, cid 3, qid 0 00:21:55.105 [2024-11-19 17:39:57.056560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.105 [2024-11-19 17:39:57.056565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.105 [2024-11-19 17:39:57.056568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034580) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056576] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:55.105 [2024-11-19 17:39:57.056580] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:55.105 [2024-11-19 17:39:57.056588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2690) 00:21:55.105 [2024-11-19 17:39:57.056600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.105 [2024-11-19 17:39:57.056609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034580, cid 3, qid 0 00:21:55.105 [2024-11-19 17:39:57.056671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.105 [2024-11-19 17:39:57.056676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.105 [2024-11-19 17:39:57.056679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056683] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034580) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.056691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.056698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2690) 00:21:55.105 [2024-11-19 17:39:57.056703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.105 [2024-11-19 17:39:57.056713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034580, cid 3, qid 0 00:21:55.105 [2024-11-19 17:39:57.059956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.105 [2024-11-19 17:39:57.059964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.105 [2024-11-19 17:39:57.059967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.059970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034580) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.059980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.059984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.059987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2690) 00:21:55.105 [2024-11-19 17:39:57.059995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.105 [2024-11-19 17:39:57.060006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2034580, cid 3, qid 0 00:21:55.105 [2024-11-19 17:39:57.060079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:55.105 [2024-11-19 
17:39:57.060085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:55.105 [2024-11-19 17:39:57.060088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:55.105 [2024-11-19 17:39:57.060091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2034580) on tqpair=0x1fd2690 00:21:55.105 [2024-11-19 17:39:57.060098] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 3 milliseconds 00:21:55.105 0% 00:21:55.105 Data Units Read: 0 00:21:55.105 Data Units Written: 0 00:21:55.105 Host Read Commands: 0 00:21:55.105 Host Write Commands: 0 00:21:55.105 Controller Busy Time: 0 minutes 00:21:55.105 Power Cycles: 0 00:21:55.105 Power On Hours: 0 hours 00:21:55.105 Unsafe Shutdowns: 0 00:21:55.105 Unrecoverable Media Errors: 0 00:21:55.105 Lifetime Error Log Entries: 0 00:21:55.105 Warning Temperature Time: 0 minutes 00:21:55.105 Critical Temperature Time: 0 minutes 00:21:55.105 00:21:55.105 Number of Queues 00:21:55.105 ================ 00:21:55.105 Number of I/O Submission Queues: 127 00:21:55.105 Number of I/O Completion Queues: 127 00:21:55.105 00:21:55.105 Active Namespaces 00:21:55.105 ================= 00:21:55.105 Namespace ID:1 00:21:55.105 Error Recovery Timeout: Unlimited 00:21:55.105 Command Set Identifier: NVM (00h) 00:21:55.105 Deallocate: Supported 00:21:55.105 Deallocated/Unwritten Error: Not Supported 00:21:55.105 Deallocated Read Value: Unknown 00:21:55.105 Deallocate in Write Zeroes: Not Supported 00:21:55.105 Deallocated Guard Field: 0xFFFF 00:21:55.105 Flush: Supported 00:21:55.105 Reservation: Supported 00:21:55.105 Namespace Sharing Capabilities: Multiple Controllers 00:21:55.105 Size (in LBAs): 131072 (0GiB) 00:21:55.105 Capacity (in LBAs): 131072 (0GiB) 00:21:55.105 Utilization (in LBAs): 131072 (0GiB) 00:21:55.105 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:55.105 EUI64: ABCDEF0123456789 00:21:55.105 UUID: 
e7fbf89b-07fa-4f81-9f6c-be32aab95bc0 00:21:55.105 Thin Provisioning: Not Supported 00:21:55.105 Per-NS Atomic Units: Yes 00:21:55.105 Atomic Boundary Size (Normal): 0 00:21:55.105 Atomic Boundary Size (PFail): 0 00:21:55.105 Atomic Boundary Offset: 0 00:21:55.105 Maximum Single Source Range Length: 65535 00:21:55.105 Maximum Copy Length: 65535 00:21:55.105 Maximum Source Range Count: 1 00:21:55.105 NGUID/EUI64 Never Reused: No 00:21:55.105 Namespace Write Protected: No 00:21:55.105 Number of LBA Formats: 1 00:21:55.105 Current LBA Format: LBA Format #00 00:21:55.105 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:55.105 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:55.105 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:55.106 rmmod nvme_tcp 00:21:55.106 
rmmod nvme_fabrics 00:21:55.106 rmmod nvme_keyring 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3537079 ']' 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3537079 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3537079 ']' 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3537079 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3537079 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3537079' 00:21:55.106 killing process with pid 3537079 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3537079 00:21:55.106 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3537079 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.365 17:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.270 17:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.270 00:21:57.270 real 0m9.409s 00:21:57.270 user 0m5.812s 00:21:57.270 sys 0m4.851s 00:21:57.270 17:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.270 17:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.271 ************************************ 00:21:57.271 END TEST nvmf_identify 00:21:57.271 ************************************ 00:21:57.530 17:39:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:57.530 17:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.530 17:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.531 ************************************ 00:21:57.531 START TEST nvmf_perf 00:21:57.531 ************************************ 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:57.531 * Looking for test storage... 00:21:57.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.531 17:39:59 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.531 --rc genhtml_branch_coverage=1 
00:21:57.531 --rc genhtml_function_coverage=1 00:21:57.531 --rc genhtml_legend=1 00:21:57.531 --rc geninfo_all_blocks=1 00:21:57.531 --rc geninfo_unexecuted_blocks=1 00:21:57.531 00:21:57.531 ' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.531 --rc genhtml_branch_coverage=1 00:21:57.531 --rc genhtml_function_coverage=1 00:21:57.531 --rc genhtml_legend=1 00:21:57.531 --rc geninfo_all_blocks=1 00:21:57.531 --rc geninfo_unexecuted_blocks=1 00:21:57.531 00:21:57.531 ' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.531 --rc genhtml_branch_coverage=1 00:21:57.531 --rc genhtml_function_coverage=1 00:21:57.531 --rc genhtml_legend=1 00:21:57.531 --rc geninfo_all_blocks=1 00:21:57.531 --rc geninfo_unexecuted_blocks=1 00:21:57.531 00:21:57.531 ' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.531 --rc genhtml_branch_coverage=1 00:21:57.531 --rc genhtml_function_coverage=1 00:21:57.531 --rc genhtml_legend=1 00:21:57.531 --rc geninfo_all_blocks=1 00:21:57.531 --rc geninfo_unexecuted_blocks=1 00:21:57.531 00:21:57.531 ' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.531 17:39:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.531 17:39:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.531 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.790 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.790 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:57.790 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:57.790 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:57.790 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:57.790 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:57.791 17:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:04.407 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.407 
17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:04.407 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:04.407 Found net devices under 0000:86:00.0: cvl_0_0 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:04.407 Found net devices under 0000:86:00.1: cvl_0_1 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.407 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:22:04.408 00:22:04.408 --- 10.0.0.2 ping statistics --- 00:22:04.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.408 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:04.408 00:22:04.408 --- 10.0.0.1 ping statistics --- 00:22:04.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.408 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3540675 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3540675 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:04.408 
17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3540675 ']' 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:04.408 [2024-11-19 17:40:05.746343] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:22:04.408 [2024-11-19 17:40:05.746397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.408 [2024-11-19 17:40:05.830338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.408 [2024-11-19 17:40:05.874323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.408 [2024-11-19 17:40:05.874363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.408 [2024-11-19 17:40:05.874371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.408 [2024-11-19 17:40:05.874378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.408 [2024-11-19 17:40:05.874404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.408 [2024-11-19 17:40:05.876040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.408 [2024-11-19 17:40:05.876089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.408 [2024-11-19 17:40:05.876196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.408 [2024-11-19 17:40:05.876196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.408 17:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:04.408 17:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.408 17:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:04.408 17:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:06.942 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:06.942 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:07.199 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:07.199 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:07.458 17:40:09 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:07.458 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:07.458 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:07.458 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:07.458 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:07.458 [2024-11-19 17:40:09.669495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.716 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.716 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:07.716 17:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.975 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:07.975 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:08.234 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.493 [2024-11-19 17:40:10.504672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.493 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:08.752 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:08.752 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:08.752 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:08.752 17:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:10.132 Initializing NVMe Controllers 00:22:10.132 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:10.132 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:10.132 Initialization complete. Launching workers. 00:22:10.132 ======================================================== 00:22:10.132 Latency(us) 00:22:10.132 Device Information : IOPS MiB/s Average min max 00:22:10.132 PCIE (0000:5e:00.0) NSID 1 from core 0: 97565.41 381.11 327.40 20.42 7299.62 00:22:10.132 ======================================================== 00:22:10.132 Total : 97565.41 381.11 327.40 20.42 7299.62 00:22:10.132 00:22:10.132 17:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:11.069 Initializing NVMe Controllers 00:22:11.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:11.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:11.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:11.069 Initialization complete. Launching workers. 
00:22:11.069 ======================================================== 00:22:11.069 Latency(us) 00:22:11.070 Device Information : IOPS MiB/s Average min max 00:22:11.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.00 0.34 11868.52 106.03 45663.79 00:22:11.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15827.60 6030.27 47885.23 00:22:11.070 ======================================================== 00:22:11.070 Total : 152.00 0.59 13587.59 106.03 47885.23 00:22:11.070 00:22:11.070 17:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:12.449 Initializing NVMe Controllers 00:22:12.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:12.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:12.450 Initialization complete. Launching workers. 
00:22:12.450 ======================================================== 00:22:12.450 Latency(us) 00:22:12.450 Device Information : IOPS MiB/s Average min max 00:22:12.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10942.00 42.74 2924.99 323.59 6267.45 00:22:12.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3942.00 15.40 8159.31 7052.56 15821.55 00:22:12.450 ======================================================== 00:22:12.450 Total : 14884.00 58.14 4311.29 323.59 15821.55 00:22:12.450 00:22:12.450 17:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:12.450 17:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:12.450 17:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:14.987 Initializing NVMe Controllers 00:22:14.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.987 Controller IO queue size 128, less than required. 00:22:14.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:14.987 Controller IO queue size 128, less than required. 00:22:14.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:14.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:14.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:14.987 Initialization complete. Launching workers. 
00:22:14.987 ======================================================== 00:22:14.987 Latency(us) 00:22:14.987 Device Information : IOPS MiB/s Average min max 00:22:14.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1808.37 452.09 71642.51 48134.34 120853.77 00:22:14.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.81 145.70 230824.77 103780.85 341399.46 00:22:14.987 ======================================================== 00:22:14.987 Total : 2391.18 597.80 110440.69 48134.34 341399.46 00:22:14.987 00:22:15.247 17:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:15.247 No valid NVMe controllers or AIO or URING devices found 00:22:15.247 Initializing NVMe Controllers 00:22:15.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.247 Controller IO queue size 128, less than required. 00:22:15.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.247 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:15.247 Controller IO queue size 128, less than required. 00:22:15.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.247 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:15.247 WARNING: Some requested NVMe devices were skipped 00:22:15.247 17:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:17.793 Initializing NVMe Controllers 00:22:17.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.793 Controller IO queue size 128, less than required. 00:22:17.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.793 Controller IO queue size 128, less than required. 00:22:17.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:17.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:17.793 Initialization complete. Launching workers. 
00:22:17.793 00:22:17.793 ==================== 00:22:17.793 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:17.793 TCP transport: 00:22:17.793 polls: 10895 00:22:17.793 idle_polls: 7596 00:22:17.793 sock_completions: 3299 00:22:17.793 nvme_completions: 6365 00:22:17.793 submitted_requests: 9616 00:22:17.793 queued_requests: 1 00:22:17.793 00:22:17.793 ==================== 00:22:17.793 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:17.793 TCP transport: 00:22:17.793 polls: 14554 00:22:17.793 idle_polls: 11001 00:22:17.793 sock_completions: 3553 00:22:17.793 nvme_completions: 6569 00:22:17.793 submitted_requests: 9844 00:22:17.793 queued_requests: 1 00:22:17.793 ======================================================== 00:22:17.793 Latency(us) 00:22:17.793 Device Information : IOPS MiB/s Average min max 00:22:17.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1590.81 397.70 82468.82 48703.92 146312.41 00:22:17.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1641.81 410.45 77927.66 47773.02 111050.22 00:22:17.793 ======================================================== 00:22:17.793 Total : 3232.62 808.15 80162.42 47773.02 146312.41 00:22:17.793 00:22:17.793 17:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:17.793 17:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.052 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:18.052 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.053 rmmod nvme_tcp 00:22:18.053 rmmod nvme_fabrics 00:22:18.053 rmmod nvme_keyring 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3540675 ']' 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3540675 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3540675 ']' 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3540675 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.053 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3540675 00:22:18.311 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.311 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.312 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3540675' 00:22:18.312 killing process with pid 3540675 00:22:18.312 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 3540675 00:22:18.312 17:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3540675 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.691 17:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:22.232 00:22:22.232 real 0m24.333s 00:22:22.232 user 1m3.366s 00:22:22.232 sys 0m8.304s 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:22.232 ************************************ 00:22:22.232 END TEST nvmf_perf 00:22:22.232 ************************************ 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.232 ************************************ 00:22:22.232 START TEST nvmf_fio_host 00:22:22.232 ************************************ 00:22:22.232 17:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:22.232 * Looking for test storage... 00:22:22.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.232 17:40:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.232 17:40:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.232 --rc genhtml_branch_coverage=1 00:22:22.232 --rc genhtml_function_coverage=1 00:22:22.232 --rc genhtml_legend=1 00:22:22.232 --rc geninfo_all_blocks=1 00:22:22.232 --rc geninfo_unexecuted_blocks=1 00:22:22.232 00:22:22.232 ' 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.232 --rc genhtml_branch_coverage=1 00:22:22.232 --rc genhtml_function_coverage=1 00:22:22.232 --rc genhtml_legend=1 00:22:22.232 --rc geninfo_all_blocks=1 00:22:22.232 --rc geninfo_unexecuted_blocks=1 00:22:22.232 00:22:22.232 ' 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.232 --rc genhtml_branch_coverage=1 00:22:22.232 --rc genhtml_function_coverage=1 00:22:22.232 --rc genhtml_legend=1 00:22:22.232 --rc geninfo_all_blocks=1 00:22:22.232 --rc geninfo_unexecuted_blocks=1 00:22:22.232 00:22:22.232 ' 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.232 --rc genhtml_branch_coverage=1 00:22:22.232 --rc genhtml_function_coverage=1 00:22:22.232 --rc genhtml_legend=1 00:22:22.232 --rc geninfo_all_blocks=1 00:22:22.232 --rc geninfo_unexecuted_blocks=1 00:22:22.232 00:22:22.232 ' 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.232 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:22.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:22.233 17:40:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.233 17:40:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:28.809 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:28.809 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.809 17:40:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:28.809 Found net devices under 0000:86:00.0: cvl_0_0 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:28.809 Found net devices under 0000:86:00.1: cvl_0_1 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.809 17:40:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.809 17:40:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.809 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.809 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.809 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.809 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:22:28.809 00:22:28.809 --- 10.0.0.2 ping statistics --- 00:22:28.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.809 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:28.809 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:28.809 00:22:28.809 --- 10.0.0.1 ping statistics --- 00:22:28.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.809 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:28.809 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3546835 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3546835 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3546835 ']' 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.810 [2024-11-19 17:40:30.169150] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:22:28.810 [2024-11-19 17:40:30.169198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.810 [2024-11-19 17:40:30.249792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.810 [2024-11-19 17:40:30.293077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.810 [2024-11-19 17:40:30.293116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:28.810 [2024-11-19 17:40:30.293124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.810 [2024-11-19 17:40:30.293130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.810 [2024-11-19 17:40:30.293135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.810 [2024-11-19 17:40:30.294748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.810 [2024-11-19 17:40:30.294855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.810 [2024-11-19 17:40:30.294871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.810 [2024-11-19 17:40:30.294874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:28.810 [2024-11-19 17:40:30.569148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:28.810 Malloc1 00:22:28.810 17:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.069 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:29.069 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.332 [2024-11-19 17:40:31.429439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.332 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:29.657 17:40:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:29.657 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:29.658 17:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.965 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:29.965 fio-3.35 00:22:29.965 Starting 1 thread 00:22:32.533 00:22:32.533 test: (groupid=0, jobs=1): err= 0: pid=3547329: Tue Nov 19 17:40:34 2024 00:22:32.533 read: IOPS=11.5k, BW=44.9MiB/s (47.0MB/s)(90.0MiB/2005msec) 00:22:32.533 slat (nsec): min=1560, max=236824, avg=1719.17, stdev=2189.51 00:22:32.533 clat (usec): min=3056, max=10385, avg=6150.54, stdev=474.07 00:22:32.533 lat (usec): min=3088, max=10386, avg=6152.26, stdev=473.96 00:22:32.533 clat percentiles (usec): 00:22:32.533 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5800], 00:22:32.533 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:22:32.533 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:22:32.533 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 9110], 99.95th=[ 9765], 00:22:32.533 | 99.99th=[10290] 00:22:32.533 bw ( KiB/s): min=45008, max=46600, per=99.95%, avg=45922.00, stdev=692.37, samples=4 00:22:32.533 iops : min=11252, max=11650, avg=11480.50, stdev=173.09, samples=4 00:22:32.533 write: IOPS=11.4k, BW=44.5MiB/s (46.7MB/s)(89.3MiB/2005msec); 0 zone resets 00:22:32.533 slat (nsec): min=1600, max=223673, avg=1785.48, stdev=1640.25 00:22:32.533 clat (usec): min=2416, max=9215, avg=4975.47, stdev=386.91 00:22:32.533 lat (usec): min=2431, max=9217, avg=4977.25, stdev=386.90 00:22:32.533 clat percentiles (usec): 00:22:32.533 | 1.00th=[ 4113], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4686], 00:22:32.533 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5080], 
00:22:32.533 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:22:32.533 | 99.00th=[ 5866], 99.50th=[ 5932], 99.90th=[ 7767], 99.95th=[ 8717], 00:22:32.533 | 99.99th=[ 9241] 00:22:32.533 bw ( KiB/s): min=45184, max=46208, per=100.00%, avg=45616.00, stdev=440.70, samples=4 00:22:32.533 iops : min=11296, max=11552, avg=11404.00, stdev=110.18, samples=4 00:22:32.533 lat (msec) : 4=0.29%, 10=99.70%, 20=0.01% 00:22:32.533 cpu : usr=73.90%, sys=25.15%, ctx=99, majf=0, minf=3 00:22:32.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:32.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:32.533 issued rwts: total=23031,22865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:32.533 00:22:32.533 Run status group 0 (all jobs): 00:22:32.533 READ: bw=44.9MiB/s (47.0MB/s), 44.9MiB/s-44.9MiB/s (47.0MB/s-47.0MB/s), io=90.0MiB (94.3MB), run=2005-2005msec 00:22:32.533 WRITE: bw=44.5MiB/s (46.7MB/s), 44.5MiB/s-44.5MiB/s (46.7MB/s-46.7MB/s), io=89.3MiB (93.7MB), run=2005-2005msec 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:32.533 17:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.533 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:32.533 fio-3.35 00:22:32.533 Starting 1 thread 00:22:35.067 00:22:35.067 test: (groupid=0, jobs=1): err= 0: pid=3547906: Tue Nov 19 17:40:37 2024 00:22:35.067 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(336MiB/2006msec) 00:22:35.067 slat (nsec): min=2527, max=97386, avg=2868.37, stdev=1510.61 00:22:35.067 clat (usec): min=1795, max=12728, avg=6888.62, stdev=1537.41 00:22:35.067 lat (usec): min=1798, max=12731, avg=6891.49, stdev=1537.47 00:22:35.067 clat percentiles (usec): 00:22:35.067 | 1.00th=[ 3589], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5604], 00:22:35.067 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 6915], 60.00th=[ 7373], 00:22:35.067 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9503], 00:22:35.067 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12256], 99.95th=[12518], 00:22:35.067 | 99.99th=[12518] 00:22:35.067 bw ( KiB/s): min=79488, max=95360, per=50.10%, avg=85824.00, stdev=6955.12, samples=4 00:22:35.067 iops : min= 4968, max= 5960, avg=5364.00, stdev=434.70, samples=4 00:22:35.067 write: IOPS=6256, BW=97.8MiB/s (103MB/s)(175MiB/1791msec); 0 zone resets 00:22:35.067 slat (usec): min=27, max=298, avg=31.92, stdev= 5.92 00:22:35.067 clat (usec): min=3258, max=15355, avg=8784.36, stdev=1531.32 00:22:35.067 lat (usec): min=3289, max=15385, avg=8816.28, stdev=1531.85 00:22:35.067 clat percentiles (usec): 00:22:35.067 | 1.00th=[ 5538], 5.00th=[ 6456], 10.00th=[ 6980], 
20.00th=[ 7570], 00:22:35.067 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:22:35.067 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10945], 95.00th=[11600], 00:22:35.067 | 99.00th=[12649], 99.50th=[13304], 99.90th=[13960], 99.95th=[14877], 00:22:35.067 | 99.99th=[15139] 00:22:35.067 bw ( KiB/s): min=84352, max=99200, per=89.13%, avg=89224.00, stdev=6922.87, samples=4 00:22:35.067 iops : min= 5272, max= 6200, avg=5576.50, stdev=432.68, samples=4 00:22:35.067 lat (msec) : 2=0.04%, 4=1.74%, 10=89.43%, 20=8.79% 00:22:35.067 cpu : usr=84.64%, sys=13.67%, ctx=132, majf=0, minf=3 00:22:35.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:35.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.067 issued rwts: total=21476,11205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.067 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.067 00:22:35.067 Run status group 0 (all jobs): 00:22:35.067 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=336MiB (352MB), run=2006-2006msec 00:22:35.067 WRITE: bw=97.8MiB/s (103MB/s), 97.8MiB/s-97.8MiB/s (103MB/s-103MB/s), io=175MiB (184MB), run=1791-1791msec 00:22:35.067 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.326 rmmod nvme_tcp 00:22:35.326 rmmod nvme_fabrics 00:22:35.326 rmmod nvme_keyring 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3546835 ']' 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3546835 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3546835 ']' 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3546835 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546835 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546835' 
00:22:35.326 killing process with pid 3546835 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3546835 00:22:35.326 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3546835 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.585 17:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.492 17:40:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.492 00:22:37.492 real 0m15.738s 00:22:37.492 user 0m45.790s 00:22:37.492 sys 0m6.463s 00:22:37.492 17:40:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.492 17:40:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.492 ************************************ 
00:22:37.492 END TEST nvmf_fio_host 00:22:37.492 ************************************ 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.751 ************************************ 00:22:37.751 START TEST nvmf_failover 00:22:37.751 ************************************ 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:37.751 * Looking for test storage... 00:22:37.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.751 17:40:39 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.751 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.752 --rc genhtml_branch_coverage=1 00:22:37.752 --rc genhtml_function_coverage=1 00:22:37.752 --rc genhtml_legend=1 00:22:37.752 --rc geninfo_all_blocks=1 00:22:37.752 --rc geninfo_unexecuted_blocks=1 00:22:37.752 00:22:37.752 ' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:37.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.752 --rc genhtml_branch_coverage=1 00:22:37.752 --rc genhtml_function_coverage=1 00:22:37.752 --rc genhtml_legend=1 00:22:37.752 --rc geninfo_all_blocks=1 00:22:37.752 --rc geninfo_unexecuted_blocks=1 00:22:37.752 00:22:37.752 ' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.752 --rc genhtml_branch_coverage=1 00:22:37.752 --rc genhtml_function_coverage=1 00:22:37.752 --rc genhtml_legend=1 00:22:37.752 --rc geninfo_all_blocks=1 00:22:37.752 --rc geninfo_unexecuted_blocks=1 00:22:37.752 00:22:37.752 ' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.752 --rc genhtml_branch_coverage=1 00:22:37.752 --rc genhtml_function_coverage=1 00:22:37.752 --rc genhtml_legend=1 00:22:37.752 --rc 
geninfo_all_blocks=1 00:22:37.752 --rc geninfo_unexecuted_blocks=1 00:22:37.752 00:22:37.752 ' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.752 17:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.327 17:40:45 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:44.327 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:44.327 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:44.327 Found net devices under 0000:86:00.0: cvl_0_0 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.327 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:44.327 Found net devices under 0000:86:00.1: cvl_0_1 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:22:44.328 00:22:44.328 --- 10.0.0.2 ping statistics --- 00:22:44.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.328 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:22:44.328 00:22:44.328 --- 10.0.0.1 ping statistics --- 00:22:44.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.328 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3551782 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3551782 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3551782 ']' 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.328 17:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.328 [2024-11-19 17:40:45.957821] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:22:44.328 [2024-11-19 17:40:45.957864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.328 [2024-11-19 17:40:46.038716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:44.328 [2024-11-19 17:40:46.081772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.328 [2024-11-19 17:40:46.081811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.328 [2024-11-19 17:40:46.081819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.328 [2024-11-19 17:40:46.081826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:44.328 [2024-11-19 17:40:46.081832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.328 [2024-11-19 17:40:46.083332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.328 [2024-11-19 17:40:46.083351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.328 [2024-11-19 17:40:46.083359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:44.328 [2024-11-19 17:40:46.392139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.328 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:44.588 Malloc0 00:22:44.588 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.847 17:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.847 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.107 [2024-11-19 17:40:47.225548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.107 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:45.366 [2024-11-19 17:40:47.434112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.366 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:45.625 [2024-11-19 17:40:47.638785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3552141 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3552141 /var/tmp/bdevperf.sock 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3552141 ']' 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.625 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.885 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.885 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:45.885 17:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:46.144 NVMe0n1 00:22:46.144 17:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:46.403 00:22:46.403 17:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.404 17:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3552159 00:22:46.404 17:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
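The trace above shows `host/failover.sh` bringing the target up over RPC: create the TCP transport, a 64 MiB/512 B `Malloc0` bdev, subsystem `nqn.2016-06.io.spdk:cnode1`, attach the namespace, then add listeners on ports 4420/4421/4422. A minimal dry-run sketch of that sequence, assuming only what the log shows (the `rpc` function here just echoes the call and is a stand-in of mine, not harness code; on a real system you would invoke `scripts/rpc.py` against the running `nvmf_tgt`):

```shell
# Dry-run recap of the RPC bring-up recorded in the trace above.
# rpc() only echoes; replace with scripts/rpc.py on a live target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three listeners, so paths can be torn down one at a time later
# to force failover in the initiator.
for port in 4420 4421 4422; do
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
```

Port 4420 carries the initial connection; 4421 and 4422 exist purely as spare paths for the failover steps that follow in the log.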
00:22:47.341 17:40:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.600 [2024-11-19 17:40:49.722737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e92d0 is same with the state(6) to be set 00:22:47.600 17:40:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:50.890 17:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n
nqn.2016-06.io.spdk:cnode1 -x failover 00:22:51.150
00:22:51.150 17:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:51.150 [2024-11-19 17:40:53.359436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ea060 is same with the state(6) to be set
[... previous line repeated 45 more times, 17:40:53.359477 through 17:40:53.359749 ...]
00:22:51.410 17:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:54.701 17:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:54.701 [2024-11-19 17:40:56.573380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:54.701 17:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:55.639 17:40:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:55.639 [2024-11-19 17:40:57.803440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eae30 is same with the state(6) to be set
[... previous line repeated 53 more times, 17:40:57.803490 through 17:40:57.803802 ...]
00:22:55.640 17:40:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3552159
00:23:02.218 {
00:23:02.218   "results": [
00:23:02.218     {
00:23:02.218       "job": "NVMe0n1",
00:23:02.218       "core_mask": "0x1",
00:23:02.218       "workload": "verify",
00:23:02.218       "status": "finished",
00:23:02.218       "verify_range": {
00:23:02.218         "start": 0,
00:23:02.218         "length": 16384
00:23:02.218       },
00:23:02.218       "queue_depth": 128,
00:23:02.218       "io_size": 4096,
00:23:02.218       "runtime": 15.011736,
00:23:02.218       "iops": 11068.14028703942,
00:23:02.218       "mibps": 43.23492299624773,
00:23:02.218       "io_failed": 4141,
00:23:02.218       "io_timeout": 0,
00:23:02.218       "avg_latency_us": 11260.940334048299,
00:23:02.218       "min_latency_us": 429.1895652173913,
00:23:02.218       "max_latency_us": 21199.471304347826
00:23:02.218     }
00:23:02.218   ],
00:23:02.218   "core_count": 1
00:23:02.218 }
00:23:02.218 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3552141
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3552141 ']'
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3552141
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3552141
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3552141'
killing process with pid 3552141
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3552141
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3552141
00:23:02.219 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:02.219 [2024-11-19 17:40:47.713784] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
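Editor's note: the `killprocess 3552141` xtrace above walks through a pid check, a liveness probe, a process-name lookup, and a kill/wait pair. The following is a minimal sketch of that sequence reconstructed purely from the trace; the real helper lives in SPDK's common/autotest_common.sh and handles extra cases (for example, resolving the child of a `sudo` wrapper) that are simplified away here.

```shell
# Sketch of killprocess as inferred from the @954-@978 trace lines above.
killprocess() {
    pid=$1
    [ -z "$pid" ] && return 1                 # @954: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1    # @958: bail out if the process is already gone
    if [ "$(uname)" = Linux ]; then           # @959: comm lookup shown only for Linux
        process_name=$(ps --no-headers -o comm= "$pid")  # @960: e.g. reactor_0 in this run
    else
        process_name=$pid
    fi
    # @964: the real helper special-cases process_name = sudo (kills the child instead);
    # this sketch simply refuses.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"      # @972
    kill "$pid"                               # @973: default SIGTERM
    wait "$pid" 2>/dev/null || true           # @978: reap our own child, ignore its exit code
}
```

In the trace, `wait 3552141` succeeds because the bdevperf target was started as a child of the same test shell, so the builtin `wait` can reap it directly.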
00:23:02.219 [2024-11-19 17:40:47.713838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3552141 ] 00:23:02.219 [2024-11-19 17:40:47.790021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.219 [2024-11-19 17:40:47.832761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.219 Running I/O for 15 seconds... 00:23:02.219 11207.00 IOPS, 43.78 MiB/s [2024-11-19T16:41:04.442Z] [2024-11-19 17:40:49.723280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.219 [2024-11-19 17:40:49.723313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.219 [2024-11-19 17:40:49.723336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.219 [2024-11-19 17:40:49.723353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.219 [2024-11-19 17:40:49.723368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.219 [2024-11-19 17:40:49.723383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.219 [2024-11-19 17:40:49.723398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99808 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 
17:40:49.723644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.219 [2024-11-19 17:40:49.723705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.219 [2024-11-19 17:40:49.723711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723726] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.220 [2024-11-19 17:40:49.723966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.723981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.723989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.723996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.220 [2024-11-19 17:40:49.724090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.220 [2024-11-19 17:40:49.724188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.220 [2024-11-19 17:40:49.724195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 
[2024-11-19 17:40:49.724346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 
[2024-11-19 17:40:49.724601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.221 [2024-11-19 17:40:49.724693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.221 [2024-11-19 17:40:49.724701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.724709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.222 [2024-11-19 17:40:49.724861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.724990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.724997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.222 [2024-11-19 17:40:49.725012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:02.222 [2024-11-19 17:40:49.725118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.222 [2024-11-19 17:40:49.725169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.222 [2024-11-19 17:40:49.725176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.223 [2024-11-19 17:40:49.725191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.223 [2024-11-19 17:40:49.725205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.223 [2024-11-19 17:40:49.725222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.223 [2024-11-19 17:40:49.725236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.223 [2024-11-19 17:40:49.725251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.223 [2024-11-19 17:40:49.725278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.223 [2024-11-19 17:40:49.725285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100160 len:8 PRP1 0x0 PRP2 0x0 00:23:02.223 [2024-11-19 17:40:49.725292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725337] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:02.223 [2024-11-19 17:40:49.725359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.223 [2024-11-19 17:40:49.725366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.223 [2024-11-19 17:40:49.725382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.223 [2024-11-19 17:40:49.725395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.223 [2024-11-19 17:40:49.725408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:49.725415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:02.223 [2024-11-19 17:40:49.728267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:02.223 [2024-11-19 17:40:49.728297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x55f340 (9): Bad file descriptor 00:23:02.223 [2024-11-19 17:40:49.757008] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:02.223 11037.50 IOPS, 43.12 MiB/s [2024-11-19T16:41:04.446Z] 11090.33 IOPS, 43.32 MiB/s [2024-11-19T16:41:04.446Z] 11160.25 IOPS, 43.59 MiB/s [2024-11-19T16:41:04.446Z] [2024-11-19 17:40:53.360191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 
[2024-11-19 17:40:53.360371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.223 [2024-11-19 17:40:53.360516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.223 [2024-11-19 17:40:53.360522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 
[2024-11-19 17:40:53.360624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.224 [2024-11-19 17:40:53.360686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 
[2024-11-19 17:40:53.360877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.224 [2024-11-19 17:40:53.360894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.224 [2024-11-19 17:40:53.360902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.360909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.360917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.360923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.360931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.360938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.360946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.360958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.360966] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.360973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.360981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.360988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.360996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.225 [2024-11-19 17:40:53.361164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.225 [2024-11-19 17:40:53.361178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.225 [2024-11-19 17:40:53.361193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.225 [2024-11-19 17:40:53.361209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:94 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.225 [2024-11-19 17:40:53.361224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:02.225 [2024-11-19 17:40:53.361304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.225 [2024-11-19 17:40:53.361400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.225 [2024-11-19 17:40:53.361408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.226 [2024-11-19 17:40:53.361415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.226 [2024-11-19 17:40:53.361423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.226 [2024-11-19 17:40:53.361429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.226 [2024-11-19 17:40:53.361438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.226 [2024-11-19 17:40:53.361444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.226 [2024-11-19 17:40:53.361452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.226 [2024-11-19 17:40:53.361459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.226 [2024-11-19 17:40:53.361467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.226 [2024-11-19 17:40:53.361473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.226 [2024-11-19 17:40:53.361481-53.361694] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE commands sqid:1 nsid:1 lba:41752-41864 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.226 [2024-11-19 17:40:53.361714-53.373731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request: *NOTICE*: queued WRITE commands sqid:1 cid:0 nsid:1 lba:41872-42040 len:8 (PRP1 0x0 PRP2 0x0) each completed manually, ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.227 [2024-11-19 17:40:53.373736-53.373889] nvme_qpair.c: likewise for queued READ commands sqid:1 cid:0 nsid:1 lba:41312-41360 len:8 (PRP1 0x0 PRP2 0x0), each completed manually, ABORTED - SQ DELETION (00/08)
00:23:02.228 [2024-11-19 17:40:53.373934] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:02.228 [2024-11-19 17:40:53.373960-53.374011] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3,2,1,0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.228 [2024-11-19 17:40:53.374018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:02.228 [2024-11-19 17:40:53.374051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x55f340 (9): Bad file descriptor
00:23:02.228 [2024-11-19 17:40:53.377052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:02.228 [2024-11-19 17:40:53.407280] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:02.228 11046.80 IOPS, 43.15 MiB/s [2024-11-19T16:41:04.451Z] 11077.00 IOPS, 43.27 MiB/s [2024-11-19T16:41:04.451Z] 11083.43 IOPS, 43.29 MiB/s [2024-11-19T16:41:04.451Z] 11100.00 IOPS, 43.36 MiB/s [2024-11-19T16:41:04.451Z] 11131.44 IOPS, 43.48 MiB/s [2024-11-19T16:41:04.451Z]
00:23:02.228 [2024-11-19 17:40:57.804887-57.805000] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ commands sqid:1 nsid:1 lba:54512-54544 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.229 [2024-11-19 17:40:57.805008-57.805649] nvme_qpair.c: likewise for queued WRITE commands sqid:1 nsid:1 lba:54624-54968 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08)
[2024-11-19 17:40:57.805656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:02.229 [2024-11-19 17:40:57.805664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.229 [2024-11-19 17:40:57.805670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.229 [2024-11-19 17:40:57.805678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.229 [2024-11-19 17:40:57.805684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.229 [2024-11-19 17:40:57.805694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.229 [2024-11-19 17:40:57.805701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.229 [2024-11-19 17:40:57.805709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.229 [2024-11-19 17:40:57.805716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.229 [2024-11-19 17:40:57.805724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:02.230 [2024-11-19 17:40:57.805914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.805935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.805991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.805999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.230 [2024-11-19 17:40:57.806075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 
17:40:57.806174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.230 [2024-11-19 17:40:57.806209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.230 [2024-11-19 17:40:57.806217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806252] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.231 [2024-11-19 17:40:57.806545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.231 [2024-11-19 17:40:57.806573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55384 len:8 PRP1 0x0 PRP2 0x0 00:23:02.231 [2024-11-19 17:40:57.806579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.231 [2024-11-19 17:40:57.806594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.231 [2024-11-19 17:40:57.806600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55392 len:8 PRP1 0x0 PRP2 0x0 00:23:02.231 [2024-11-19 
17:40:57.806606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.231 [2024-11-19 17:40:57.806621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.231 [2024-11-19 17:40:57.806626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55400 len:8 PRP1 0x0 PRP2 0x0 00:23:02.231 [2024-11-19 17:40:57.806633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.231 [2024-11-19 17:40:57.806644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.231 [2024-11-19 17:40:57.806654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55408 len:8 PRP1 0x0 PRP2 0x0 00:23:02.231 [2024-11-19 17:40:57.806661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.231 [2024-11-19 17:40:57.806673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.231 [2024-11-19 17:40:57.806679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55416 len:8 PRP1 0x0 PRP2 0x0 00:23:02.231 [2024-11-19 17:40:57.806685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:02.231 [2024-11-19 17:40:57.806696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.231 [2024-11-19 17:40:57.806702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55424 len:8 PRP1 0x0 PRP2 0x0 00:23:02.231 [2024-11-19 17:40:57.806708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.231 [2024-11-19 17:40:57.806714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.231 [2024-11-19 17:40:57.806719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.231 [2024-11-19 17:40:57.806725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55432 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55440 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806771] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55448 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55456 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55464 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55472 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:02.232 [2024-11-19 17:40:57.806858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55480 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55488 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.806917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55496 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.806924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.806930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.806935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:23:02.232 [2024-11-19 17:40:57.806941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55504 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.816978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.816993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.817000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.817007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55512 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.817016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.817024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.817030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.817036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55520 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.817044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.817054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.232 [2024-11-19 17:40:57.817061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.232 [2024-11-19 17:40:57.817068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55528 len:8 PRP1 0x0 PRP2 0x0 00:23:02.232 [2024-11-19 17:40:57.817078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.817126] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:02.232 [2024-11-19 17:40:57.817154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.232 [2024-11-19 17:40:57.817163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.817173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.232 [2024-11-19 17:40:57.817181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.817190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.232 [2024-11-19 17:40:57.817198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.817207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.232 [2024-11-19 17:40:57.817215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.232 [2024-11-19 17:40:57.817224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:23:02.232 [2024-11-19 17:40:57.817251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x55f340 (9): Bad file descriptor 00:23:02.232 [2024-11-19 17:40:57.820832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:02.232 [2024-11-19 17:40:57.844343] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:02.232 11093.60 IOPS, 43.33 MiB/s [2024-11-19T16:41:04.455Z] 11077.27 IOPS, 43.27 MiB/s [2024-11-19T16:41:04.455Z] 11081.50 IOPS, 43.29 MiB/s [2024-11-19T16:41:04.455Z] 11082.92 IOPS, 43.29 MiB/s [2024-11-19T16:41:04.455Z] 11085.00 IOPS, 43.30 MiB/s [2024-11-19T16:41:04.455Z] 11068.87 IOPS, 43.24 MiB/s 00:23:02.232 Latency(us) 00:23:02.232 [2024-11-19T16:41:04.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.232 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:02.232 Verification LBA range: start 0x0 length 0x4000 00:23:02.232 NVMe0n1 : 15.01 11068.14 43.23 275.85 0.00 11260.94 429.19 21199.47 00:23:02.232 [2024-11-19T16:41:04.455Z] =================================================================================================================== 00:23:02.232 [2024-11-19T16:41:04.455Z] Total : 11068.14 43.23 275.85 0.00 11260.94 429.19 21199.47 00:23:02.232 Received shutdown signal, test time was about 15.000000 seconds 00:23:02.232 00:23:02.232 Latency(us) 00:23:02.232 [2024-11-19T16:41:04.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.232 [2024-11-19T16:41:04.455Z] =================================================================================================================== 00:23:02.232 [2024-11-19T16:41:04.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.232 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:23:02.232 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:02.232 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:02.232 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3554678 00:23:02.232 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:02.232 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3554678 /var/tmp/bdevperf.sock 00:23:02.233 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3554678 ']' 00:23:02.233 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.233 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.233 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:02.233 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.233 17:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.233 17:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.233 17:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:02.233 17:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.233 [2024-11-19 17:41:04.344915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.233 17:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:02.492 [2024-11-19 17:41:04.557486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:02.492 17:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.751 NVMe0n1 00:23:02.751 17:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:03.321 00:23:03.321 17:41:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:03.580 00:23:03.580 17:41:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:03.581 17:41:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.840 17:41:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.099 17:41:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:07.391 17:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.391 17:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:07.391 17:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:07.391 17:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3555601 00:23:07.391 17:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3555601 00:23:08.328 { 00:23:08.328 "results": [ 00:23:08.328 { 00:23:08.328 "job": "NVMe0n1", 00:23:08.328 "core_mask": "0x1", 00:23:08.328 "workload": "verify", 00:23:08.328 "status": "finished", 00:23:08.328 "verify_range": { 00:23:08.328 "start": 0, 00:23:08.328 "length": 16384 00:23:08.328 }, 00:23:08.328 "queue_depth": 128, 00:23:08.328 "io_size": 4096, 00:23:08.328 "runtime": 1.006431, 00:23:08.328 "iops": 11089.682253428204, 00:23:08.328 "mibps": 43.31907130245392, 00:23:08.328 "io_failed": 0, 00:23:08.328 "io_timeout": 0, 00:23:08.328 "avg_latency_us": 
11484.881598423079, 00:23:08.328 "min_latency_us": 1631.2765217391304, 00:23:08.328 "max_latency_us": 11055.638260869566 00:23:08.328 } 00:23:08.328 ], 00:23:08.328 "core_count": 1 00:23:08.328 } 00:23:08.328 17:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:08.328 [2024-11-19 17:41:03.954625] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:23:08.328 [2024-11-19 17:41:03.954677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554678 ] 00:23:08.328 [2024-11-19 17:41:04.030010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.328 [2024-11-19 17:41:04.068223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.328 [2024-11-19 17:41:06.078284] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:08.328 [2024-11-19 17:41:06.078330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-19 17:41:06.078341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-19 17:41:06.078350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-19 17:41:06.078357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-19 17:41:06.078364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-19 17:41:06.078371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-19 17:41:06.078378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-19 17:41:06.078384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-19 17:41:06.078391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:08.328 [2024-11-19 17:41:06.078415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:08.328 [2024-11-19 17:41:06.078429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1a340 (9): Bad file descriptor 00:23:08.328 [2024-11-19 17:41:06.129923] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:08.328 Running I/O for 1 seconds... 
00:23:08.328 11012.00 IOPS, 43.02 MiB/s 00:23:08.328 Latency(us) 00:23:08.328 [2024-11-19T16:41:10.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.328 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:08.328 Verification LBA range: start 0x0 length 0x4000 00:23:08.329 NVMe0n1 : 1.01 11089.68 43.32 0.00 0.00 11484.88 1631.28 11055.64 00:23:08.329 [2024-11-19T16:41:10.552Z] =================================================================================================================== 00:23:08.329 [2024-11-19T16:41:10.552Z] Total : 11089.68 43.32 0.00 0.00 11484.88 1631.28 11055.64 00:23:08.329 17:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.329 17:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:08.588 17:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:08.847 17:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.847 17:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:08.847 17:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:09.106 17:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3554678 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3554678 ']' 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3554678 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3554678 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3554678' 00:23:12.397 killing process with pid 3554678 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3554678 00:23:12.397 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3554678 00:23:12.656 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.657 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.916 rmmod nvme_tcp 00:23:12.916 rmmod nvme_fabrics 00:23:12.916 rmmod nvme_keyring 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3551782 ']' 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3551782 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3551782 ']' 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3551782 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.916 17:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551782 00:23:12.916 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:12.916 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:12.916 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551782' 00:23:12.916 killing process with pid 3551782 00:23:12.916 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3551782 00:23:12.916 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3551782 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.176 17:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.083 17:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.083 00:23:15.083 real 0m37.509s 00:23:15.083 user 1m58.675s 00:23:15.083 sys 
0m7.985s 00:23:15.083 17:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:15.083 17:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.083 ************************************ 00:23:15.083 END TEST nvmf_failover 00:23:15.083 ************************************ 00:23:15.083 17:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:15.083 17:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.083 17:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.083 17:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.343 ************************************ 00:23:15.343 START TEST nvmf_host_discovery 00:23:15.343 ************************************ 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:15.343 * Looking for test storage... 
00:23:15.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:15.343 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.344 --rc genhtml_branch_coverage=1 00:23:15.344 --rc genhtml_function_coverage=1 00:23:15.344 --rc 
genhtml_legend=1 00:23:15.344 --rc geninfo_all_blocks=1 00:23:15.344 --rc geninfo_unexecuted_blocks=1 00:23:15.344 00:23:15.344 ' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.344 --rc genhtml_branch_coverage=1 00:23:15.344 --rc genhtml_function_coverage=1 00:23:15.344 --rc genhtml_legend=1 00:23:15.344 --rc geninfo_all_blocks=1 00:23:15.344 --rc geninfo_unexecuted_blocks=1 00:23:15.344 00:23:15.344 ' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.344 --rc genhtml_branch_coverage=1 00:23:15.344 --rc genhtml_function_coverage=1 00:23:15.344 --rc genhtml_legend=1 00:23:15.344 --rc geninfo_all_blocks=1 00:23:15.344 --rc geninfo_unexecuted_blocks=1 00:23:15.344 00:23:15.344 ' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:15.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.344 --rc genhtml_branch_coverage=1 00:23:15.344 --rc genhtml_function_coverage=1 00:23:15.344 --rc genhtml_legend=1 00:23:15.344 --rc geninfo_all_blocks=1 00:23:15.344 --rc geninfo_unexecuted_blocks=1 00:23:15.344 00:23:15.344 ' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.344 17:41:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.344 17:41:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.344 17:41:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.344 17:41:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.919 
17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.919 17:41:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:21.919 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:21.919 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:21.919 Found net devices under 0000:86:00.0: cvl_0_0 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.919 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:21.920 Found net devices under 0000:86:00.1: cvl_0_1 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:21.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:23:21.920 00:23:21.920 --- 10.0.0.2 ping statistics --- 00:23:21.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.920 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:23:21.920 00:23:21.920 --- 10.0.0.1 ping statistics --- 00:23:21.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.920 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.920 
17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3560046 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3560046 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3560046 ']' 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.920 [2024-11-19 17:41:23.458852] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:23:21.920 [2024-11-19 17:41:23.458902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.920 [2024-11-19 17:41:23.537962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.920 [2024-11-19 17:41:23.579036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.920 [2024-11-19 17:41:23.579072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.920 [2024-11-19 17:41:23.579079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.920 [2024-11-19 17:41:23.579085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.920 [2024-11-19 17:41:23.579090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.920 [2024-11-19 17:41:23.579690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.920 [2024-11-19 17:41:23.715746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.920 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.920 [2024-11-19 17:41:23.727922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:21.920 17:41:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 null0 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 null1 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3560072 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3560072 /tmp/host.sock 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3560072 ']' 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:21.921 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.921 17:41:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 [2024-11-19 17:41:23.808814] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:23:21.921 [2024-11-19 17:41:23.808858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560072 ] 00:23:21.921 [2024-11-19 17:41:23.881902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.921 [2024-11-19 17:41:23.924894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:21.921 
17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:21.921 17:41:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:22.181 17:41:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:23:22.181 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 [2024-11-19 17:41:24.353511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.182 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:22.442 17:41:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:23.010 [2024-11-19 17:41:25.082443] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:23.010 [2024-11-19 17:41:25.082462] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:23.010 [2024-11-19 17:41:25.082474] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.010 [2024-11-19 17:41:25.210867] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:23.268 [2024-11-19 17:41:25.271497] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:23.268 [2024-11-19 17:41:25.272215] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x679dd0:1 started. 00:23:23.268 [2024-11-19 17:41:25.273599] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:23.268 [2024-11-19 17:41:25.273615] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.268 [2024-11-19 17:41:25.280992] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x679dd0 was disconnected and freed. delete nvme_qpair. 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.527 17:41:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:23.527 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.528 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:23.787 
17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.787 [2024-11-19 17:41:25.930679] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x67a1a0:1 started. 00:23:23.787 [2024-11-19 17:41:25.932984] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x67a1a0 was disconnected and freed. delete nvme_qpair. 
00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.787 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.788 17:41:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.048 [2024-11-19 17:41:26.014369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.048 [2024-11-19 17:41:26.015004] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.048 [2024-11-19 17:41:26.015023] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:24.048 17:41:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:24.048 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.049 17:41:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.049 [2024-11-19 17:41:26.141750] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:24.049 17:41:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:24.049 [2024-11-19 17:41:26.240422] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:24.049 [2024-11-19 17:41:26.240455] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:24.049 [2024-11-19 17:41:26.240463] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:24.049 [2024-11-19 17:41:26.240468] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:24.984 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.245 [2024-11-19 17:41:27.270395] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:25.245 [2024-11-19 17:41:27.270417] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.245 [2024-11-19 17:41:27.275082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.245 [2024-11-19 17:41:27.275100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.245 [2024-11-19 17:41:27.275109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:25.245 [2024-11-19 17:41:27.275116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.245 [2024-11-19 17:41:27.275123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.245 [2024-11-19 17:41:27.275130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.245 [2024-11-19 17:41:27.275137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.245 [2024-11-19 17:41:27.275144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.245 [2024-11-19 17:41:27.275150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.245 17:41:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.245 [2024-11-19 17:41:27.285095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.245 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.245 [2024-11-19 17:41:27.295130] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.245 [2024-11-19 17:41:27.295142] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.245 [2024-11-19 17:41:27.295146] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.245 [2024-11-19 17:41:27.295151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.245 [2024-11-19 17:41:27.295167] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:25.245 [2024-11-19 17:41:27.295341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.245 [2024-11-19 17:41:27.295360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64a390 with addr=10.0.0.2, port=4420 00:23:25.245 [2024-11-19 17:41:27.295368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.245 [2024-11-19 17:41:27.295381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.245 [2024-11-19 17:41:27.295391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.245 [2024-11-19 17:41:27.295397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.245 [2024-11-19 17:41:27.295405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.245 [2024-11-19 17:41:27.295411] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.245 [2024-11-19 17:41:27.295417] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.245 [2024-11-19 17:41:27.295421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.245 [2024-11-19 17:41:27.305199] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.245 [2024-11-19 17:41:27.305210] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:25.245 [2024-11-19 17:41:27.305214] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.245 [2024-11-19 17:41:27.305218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.245 [2024-11-19 17:41:27.305231] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.245 [2024-11-19 17:41:27.305343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.245 [2024-11-19 17:41:27.305354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64a390 with addr=10.0.0.2, port=4420 00:23:25.245 [2024-11-19 17:41:27.305361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.245 [2024-11-19 17:41:27.305371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.245 [2024-11-19 17:41:27.305384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.245 [2024-11-19 17:41:27.305390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.245 [2024-11-19 17:41:27.305397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.245 [2024-11-19 17:41:27.305402] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.245 [2024-11-19 17:41:27.305407] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.245 [2024-11-19 17:41:27.305411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:25.245 [2024-11-19 17:41:27.315264] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.245 [2024-11-19 17:41:27.315277] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.245 [2024-11-19 17:41:27.315281] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.245 [2024-11-19 17:41:27.315285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.245 [2024-11-19 17:41:27.315299] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.246 [2024-11-19 17:41:27.315461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.246 [2024-11-19 17:41:27.315473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64a390 with addr=10.0.0.2, port=4420 00:23:25.246 [2024-11-19 17:41:27.315481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.246 [2024-11-19 17:41:27.315492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.246 [2024-11-19 17:41:27.315501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.246 [2024-11-19 17:41:27.315507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.246 [2024-11-19 17:41:27.315513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.246 [2024-11-19 17:41:27.315519] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:25.246 [2024-11-19 17:41:27.315524] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.246 [2024-11-19 17:41:27.315527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.246 [2024-11-19 17:41:27.325330] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.246 [2024-11-19 17:41:27.325343] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.246 [2024-11-19 17:41:27.325347] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.246 [2024-11-19 17:41:27.325351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.246 [2024-11-19 17:41:27.325363] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:25.246 [2024-11-19 17:41:27.325547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.246 [2024-11-19 17:41:27.325560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64a390 with addr=10.0.0.2, port=4420 00:23:25.246 [2024-11-19 17:41:27.325567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.246 [2024-11-19 17:41:27.325581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.246 [2024-11-19 17:41:27.325596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.246 [2024-11-19 17:41:27.325603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.246 [2024-11-19 17:41:27.325610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.246 [2024-11-19 17:41:27.325616] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.246 [2024-11-19 17:41:27.325622] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.246 [2024-11-19 17:41:27.325626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.246 [2024-11-19 17:41:27.335393] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.246 [2024-11-19 17:41:27.335405] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.246 [2024-11-19 17:41:27.335408] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:25.246 [2024-11-19 17:41:27.335413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.246 [2024-11-19 17:41:27.335425] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.246 [2024-11-19 17:41:27.335634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.246 [2024-11-19 17:41:27.335646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64a390 with addr=10.0.0.2, port=4420 00:23:25.246 [2024-11-19 17:41:27.335653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.246 [2024-11-19 17:41:27.335663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.246 [2024-11-19 17:41:27.336328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.246 [2024-11-19 17:41:27.336339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.246 [2024-11-19 17:41:27.336346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.246 [2024-11-19 17:41:27.336356] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.246 [2024-11-19 17:41:27.336360] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.246 [2024-11-19 17:41:27.336364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.246 [2024-11-19 17:41:27.345456] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:23:25.246 [2024-11-19 17:41:27.345467] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.246 [2024-11-19 17:41:27.345471] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.246 [2024-11-19 17:41:27.345475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.246 [2024-11-19 17:41:27.345488] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.246 [2024-11-19 17:41:27.345746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.246 [2024-11-19 17:41:27.345758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64a390 with addr=10.0.0.2, port=4420 00:23:25.246 [2024-11-19 17:41:27.345765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.246 [2024-11-19 17:41:27.345775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.246 [2024-11-19 17:41:27.345792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.246 [2024-11-19 17:41:27.345798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.246 [2024-11-19 17:41:27.345805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.246 [2024-11-19 17:41:27.345811] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.246 [2024-11-19 17:41:27.345815] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:23:25.246 [2024-11-19 17:41:27.345819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.246 [2024-11-19 17:41:27.355519] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.246 [2024-11-19 17:41:27.355529] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.246 [2024-11-19 17:41:27.355533] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.246 [2024-11-19 17:41:27.355537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.246 [2024-11-19 17:41:27.355549] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.246 [2024-11-19 17:41:27.355749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.246 [2024-11-19 17:41:27.355760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64a390 with addr=10.0.0.2, port=4420 00:23:25.246 [2024-11-19 17:41:27.355768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a390 is same with the state(6) to be set 00:23:25.246 [2024-11-19 17:41:27.355778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64a390 (9): Bad file descriptor 00:23:25.246 [2024-11-19 17:41:27.355793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.246 [2024-11-19 17:41:27.355799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.246 [2024-11-19 17:41:27.355812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:25.246 [2024-11-19 17:41:27.355817] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.246 [2024-11-19 17:41:27.355822] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.246 [2024-11-19 17:41:27.355826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.246 [2024-11-19 17:41:27.356856] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:25.246 [2024-11-19 17:41:27.356871] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:25.246 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:25.247 17:41:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.247 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:25.505 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:25.506 17:41:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.506 17:41:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.885 [2024-11-19 17:41:28.671474] 
bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:26.885 [2024-11-19 17:41:28.671489] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:26.885 [2024-11-19 17:41:28.671500] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.885 [2024-11-19 17:41:28.757762] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:26.885 [2024-11-19 17:41:28.816371] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:26.885 [2024-11-19 17:41:28.816945] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x65b820:1 started. 00:23:26.885 [2024-11-19 17:41:28.818508] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:26.885 [2024-11-19 17:41:28.818532] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.885 request: 00:23:26.885 { 00:23:26.885 "name": "nvme", 00:23:26.885 "trtype": "tcp", 00:23:26.885 "traddr": "10.0.0.2", 00:23:26.885 "adrfam": "ipv4", 00:23:26.885 "trsvcid": "8009", 00:23:26.885 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:26.885 "wait_for_attach": true, 00:23:26.885 "method": "bdev_nvme_start_discovery", 00:23:26.885 "req_id": 1 00:23:26.885 } 00:23:26.885 Got JSON-RPC error response 00:23:26.885 response: 00:23:26.885 { 00:23:26.885 "code": -17, 00:23:26.885 "message": "File exists" 00:23:26.885 } 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 
-- # get_discovery_ctrlrs 00:23:26.885 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.886 [2024-11-19 17:41:28.860512] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x65b820 was disconnected and freed. delete nvme_qpair. 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.886 17:41:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.886 request: 00:23:26.886 { 00:23:26.886 "name": "nvme_second", 00:23:26.886 "trtype": "tcp", 00:23:26.886 "traddr": "10.0.0.2", 00:23:26.886 "adrfam": "ipv4", 00:23:26.886 "trsvcid": "8009", 00:23:26.886 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:26.886 "wait_for_attach": true, 00:23:26.886 "method": "bdev_nvme_start_discovery", 
00:23:26.886 "req_id": 1 00:23:26.886 } 00:23:26.886 Got JSON-RPC error response 00:23:26.886 response: 00:23:26.886 { 00:23:26.886 "code": -17, 00:23:26.886 "message": "File exists" 00:23:26.886 } 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:26.886 17:41:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.886 17:41:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.896 [2024-11-19 17:41:30.054213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.896 [2024-11-19 17:41:30.054246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x684fd0 with addr=10.0.0.2, port=8010 00:23:27.896 [2024-11-19 17:41:30.054265] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:27.896 [2024-11-19 17:41:30.054272] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:27.896 [2024-11-19 17:41:30.054279] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:28.878 [2024-11-19 17:41:31.056721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.878 [2024-11-19 17:41:31.056745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x684fd0 with addr=10.0.0.2, port=8010 00:23:28.878 [2024-11-19 17:41:31.056761] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:28.878 [2024-11-19 17:41:31.056767] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:28.878 [2024-11-19 17:41:31.056774] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:30.257 [2024-11-19 17:41:32.058899] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:30.257 request: 00:23:30.257 { 00:23:30.257 "name": "nvme_second", 00:23:30.257 "trtype": "tcp", 00:23:30.257 "traddr": "10.0.0.2", 00:23:30.257 "adrfam": "ipv4", 00:23:30.257 "trsvcid": "8010", 
00:23:30.257 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:30.257 "wait_for_attach": false, 00:23:30.257 "attach_timeout_ms": 3000, 00:23:30.257 "method": "bdev_nvme_start_discovery", 00:23:30.257 "req_id": 1 00:23:30.257 } 00:23:30.257 Got JSON-RPC error response 00:23:30.257 response: 00:23:30.257 { 00:23:30.257 "code": -110, 00:23:30.257 "message": "Connection timed out" 00:23:30.257 } 00:23:30.257 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:30.257 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:30.258 17:41:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3560072 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.258 rmmod nvme_tcp 00:23:30.258 rmmod nvme_fabrics 00:23:30.258 rmmod nvme_keyring 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3560046 ']' 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3560046 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3560046 ']' 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3560046 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560046 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560046' 00:23:30.258 killing process with pid 3560046 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3560046 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3560046 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:23:30.258 17:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.795 00:23:32.795 real 0m17.123s 00:23:32.795 user 0m20.394s 00:23:32.795 sys 0m5.882s 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.795 ************************************ 00:23:32.795 END TEST nvmf_host_discovery 00:23:32.795 ************************************ 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.795 ************************************ 00:23:32.795 START TEST nvmf_host_multipath_status 00:23:32.795 ************************************ 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:32.795 * Looking for test storage... 
00:23:32.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:32.795 17:41:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.795 17:41:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.795 --rc genhtml_branch_coverage=1 00:23:32.795 --rc genhtml_function_coverage=1 00:23:32.795 --rc genhtml_legend=1 00:23:32.795 --rc geninfo_all_blocks=1 00:23:32.795 --rc geninfo_unexecuted_blocks=1 00:23:32.795 00:23:32.795 ' 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.795 --rc genhtml_branch_coverage=1 00:23:32.795 --rc genhtml_function_coverage=1 00:23:32.795 --rc genhtml_legend=1 00:23:32.795 --rc geninfo_all_blocks=1 00:23:32.795 --rc geninfo_unexecuted_blocks=1 00:23:32.795 00:23:32.795 ' 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.795 --rc genhtml_branch_coverage=1 00:23:32.795 --rc genhtml_function_coverage=1 00:23:32.795 --rc genhtml_legend=1 00:23:32.795 --rc geninfo_all_blocks=1 00:23:32.795 --rc geninfo_unexecuted_blocks=1 00:23:32.795 00:23:32.795 ' 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.795 --rc genhtml_branch_coverage=1 00:23:32.795 --rc genhtml_function_coverage=1 00:23:32.795 --rc genhtml_legend=1 00:23:32.795 --rc geninfo_all_blocks=1 00:23:32.795 --rc geninfo_unexecuted_blocks=1 00:23:32.795 00:23:32.795 ' 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:32.795 
17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.795 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.796 17:41:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.796 17:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:39.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:39.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:39.366 Found net devices under 0000:86:00.0: cvl_0_0 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.366 17:41:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.366 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:39.367 Found net devices under 0000:86:00.1: cvl_0_1 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.367 17:41:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:23:39.367 00:23:39.367 --- 10.0.0.2 ping statistics --- 00:23:39.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.367 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:23:39.367 00:23:39.367 --- 10.0.0.1 ping statistics --- 00:23:39.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.367 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3565148 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3565148 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3565148 ']' 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.367 [2024-11-19 17:41:40.702179] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:23:39.367 [2024-11-19 17:41:40.702223] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.367 [2024-11-19 17:41:40.781707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:39.367 [2024-11-19 17:41:40.823385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.367 [2024-11-19 17:41:40.823422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:39.367 [2024-11-19 17:41:40.823429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.367 [2024-11-19 17:41:40.823435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.367 [2024-11-19 17:41:40.823441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.367 [2024-11-19 17:41:40.824667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.367 [2024-11-19 17:41:40.824670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3565148 00:23:39.367 17:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:39.367 [2024-11-19 17:41:41.126008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.367 17:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:39.367 Malloc0 00:23:39.367 17:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:39.367 17:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.626 17:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.885 [2024-11-19 17:41:41.952366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.885 17:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.144 [2024-11-19 17:41:42.152859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3565405 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3565405 /var/tmp/bdevperf.sock 00:23:40.144 17:41:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3565405 ']' 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.144 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:40.403 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.403 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:40.403 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:40.662 17:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:40.920 Nvme0n1 00:23:40.920 17:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:41.487 Nvme0n1 00:23:41.487 17:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.487 17:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:43.392 17:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:43.392 17:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:43.650 17:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:43.909 17:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:44.845 17:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:44.845 17:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.845 17:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.845 17:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:45.104 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.104 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:45.104 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.104 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.363 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.363 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.363 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.363 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.621 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.621 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.621 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.621 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:45.621 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.621 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.621 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.622 17:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.880 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.880 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:45.880 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.880 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:46.139 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.139 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:46.139 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:46.397 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.656 17:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:47.592 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:47.592 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.592 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.592 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.851 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.851 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:47.851 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.851 17:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.111 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.370 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.370 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:48.370 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.370 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.629 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.629 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:48.629 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.629 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:48.887 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.887 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:48.887 17:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:49.146 17:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:49.146 17:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.523 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.783 17:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.041 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.041 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.041 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.041 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.300 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.300 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.300 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.300 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.559 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.559 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:51.559 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:51.818 17:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:51.818 17:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.193 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.499 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.757 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.757 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.757 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.757 17:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.016 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.016 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:54.016 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.016 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.273 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.273 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:54.273 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:54.531 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:54.790 17:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:55.728 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:55.728 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:55.728 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.728 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.987 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.987 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:55.987 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.987 17:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.987 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.987 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.987 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.987 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.247 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.247 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.247 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.247 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.505 17:41:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.505 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:56.505 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.505 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.764 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.764 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:56.764 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.764 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.764 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.764 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:56.764 17:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:57.022 17:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.280 17:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:58.215 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:58.215 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:58.215 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.215 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.473 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.473 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:58.473 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.473 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.731 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.731 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.731 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.731 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.989 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.989 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.989 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.989 17:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.989 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.989 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:58.989 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.989 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.248 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.248 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.248 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.248 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.508 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.508 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:59.767 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:59.768 17:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:00.026 17:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:00.026 17:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.401 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.660 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.660 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.660 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.660 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.660 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.660 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.660 17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.660 
17:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.919 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.919 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.919 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.919 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:02.178 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.178 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:02.178 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.178 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.437 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.437 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:02.437 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.697 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.697 17:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:04.074 17:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:04.074 17:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:04.074 17:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.074 17:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:04.074 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.074 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:04.074 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.074 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.333 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.333 17:42:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.333 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.333 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.333 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.333 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.333 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.333 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.592 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.592 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.592 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.592 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.851 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.851 
17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.851 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.851 17:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.110 17:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.110 17:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:05.110 17:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:05.110 17:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:05.368 17:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.743 17:42:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.743 17:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:07.002 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.002 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:07.002 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.002 17:42:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.258 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.258 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:07.258 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.258 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.516 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.516 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.516 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.516 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.775 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.775 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:07.775 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.775 17:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:08.034 17:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.409 17:42:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.409 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.668 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.668 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.668 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.668 17:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.926 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.926 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.926 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.926 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.184 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.185 
17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:10.185 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.185 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.443 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3565405 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3565405 ']' 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3565405 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3565405 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3565405' 00:24:10.444 killing process with pid 3565405 00:24:10.444 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3565405 00:24:10.444 
17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3565405 00:24:10.444 { 00:24:10.444 "results": [ 00:24:10.444 { 00:24:10.444 "job": "Nvme0n1", 00:24:10.444 "core_mask": "0x4", 00:24:10.444 "workload": "verify", 00:24:10.444 "status": "terminated", 00:24:10.444 "verify_range": { 00:24:10.444 "start": 0, 00:24:10.444 "length": 16384 00:24:10.444 }, 00:24:10.444 "queue_depth": 128, 00:24:10.444 "io_size": 4096, 00:24:10.444 "runtime": 28.892281, 00:24:10.444 "iops": 10424.237532509116, 00:24:10.444 "mibps": 40.719677861363735, 00:24:10.444 "io_failed": 0, 00:24:10.444 "io_timeout": 0, 00:24:10.444 "avg_latency_us": 12258.947902193402, 00:24:10.444 "min_latency_us": 423.8469565217391, 00:24:10.444 "max_latency_us": 3019898.88 00:24:10.444 } 00:24:10.444 ], 00:24:10.444 "core_count": 1 00:24:10.444 } 00:24:10.706 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3565405 00:24:10.706 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:10.706 [2024-11-19 17:41:42.227479] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:24:10.706 [2024-11-19 17:41:42.227535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565405 ] 00:24:10.706 [2024-11-19 17:41:42.302551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.706 [2024-11-19 17:41:42.343376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.706 Running I/O for 90 seconds... 
00:24:10.706 11366.00 IOPS, 44.40 MiB/s [2024-11-19T16:42:12.929Z] 11226.00 IOPS, 43.85 MiB/s [2024-11-19T16:42:12.929Z] 11270.67 IOPS, 44.03 MiB/s [2024-11-19T16:42:12.929Z] 11312.25 IOPS, 44.19 MiB/s [2024-11-19T16:42:12.929Z] 11270.60 IOPS, 44.03 MiB/s [2024-11-19T16:42:12.929Z] 11257.67 IOPS, 43.98 MiB/s [2024-11-19T16:42:12.929Z] 11257.57 IOPS, 43.97 MiB/s [2024-11-19T16:42:12.929Z] 11252.00 IOPS, 43.95 MiB/s [2024-11-19T16:42:12.929Z] 11223.11 IOPS, 43.84 MiB/s [2024-11-19T16:42:12.929Z] 11226.00 IOPS, 43.85 MiB/s [2024-11-19T16:42:12.929Z] 11222.18 IOPS, 43.84 MiB/s [2024-11-19T16:42:12.929Z] 11224.92 IOPS, 43.85 MiB/s [2024-11-19T16:42:12.929Z] [2024-11-19 17:41:56.535695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.535938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.535953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:10.706 [2024-11-19 17:41:56.536434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.706 [2024-11-19 17:41:56.536450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.536982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.536989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:10.707 [2024-11-19 17:41:56.537212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.707 [2024-11-19 17:41:56.537218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.708 [2024-11-19 17:41:56.537938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:10.708 [2024-11-19 17:41:56.537958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.537966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.537982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.537989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.709 [2024-11-19 17:41:56.538622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.709 [2024-11-19 17:41:56.538645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:10.709 [2024-11-19 17:41:56.538686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.709 [2024-11-19 17:41:56.538693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.538979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.538988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.539013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.539039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.539064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:41:56.539091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:41:56.539117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:41:56.539142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:41:56.539168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:41:56.539193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:41:56.539218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:41:56.539243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:41:56.539262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:41:56.539269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:10.710 11155.62 IOPS, 43.58 MiB/s [2024-11-19T16:42:12.933Z] 10358.79 IOPS, 40.46 MiB/s [2024-11-19T16:42:12.933Z] 9668.20 IOPS, 37.77 MiB/s [2024-11-19T16:42:12.933Z] 9114.06 IOPS, 35.60 MiB/s [2024-11-19T16:42:12.933Z] 9238.18 IOPS, 36.09 MiB/s [2024-11-19T16:42:12.933Z] 9356.17 IOPS, 36.55 MiB/s [2024-11-19T16:42:12.933Z] 9511.63 IOPS, 37.15 MiB/s [2024-11-19T16:42:12.933Z] 9704.25 IOPS, 37.91 MiB/s [2024-11-19T16:42:12.933Z] 9883.29 IOPS, 38.61 MiB/s [2024-11-19T16:42:12.933Z] 9957.27 IOPS, 38.90 MiB/s [2024-11-19T16:42:12.933Z] 10012.91 IOPS, 39.11 MiB/s [2024-11-19T16:42:12.933Z] 10062.79 IOPS, 39.31 MiB/s [2024-11-19T16:42:12.933Z] 10184.32 IOPS, 39.78 MiB/s [2024-11-19T16:42:12.933Z] 10301.38 IOPS, 40.24 MiB/s [2024-11-19T16:42:12.933Z] [2024-11-19 17:42:10.176931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:42:10.176990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.177027] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:42:10.177038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.177051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:42:10.177060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.177079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.710 [2024-11-19 17:42:10.177087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.178257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:42:10.178279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.178295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:42:10.178304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.178317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:42:10.178325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.178338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.710 [2024-11-19 17:42:10.178346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:10.710 [2024-11-19 17:42:10.178358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.178688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.178696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:10.711 [2024-11-19 17:42:10.180714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.711 [2024-11-19 17:42:10.180722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:10.712 [2024-11-19 17:42:10.180736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.712 [2024-11-19 17:42:10.180743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.712 [2024-11-19 17:42:10.180757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.712 [2024-11-19 17:42:10.180765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:10.712 [2024-11-19 17:42:10.180779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.712 [2024-11-19 17:42:10.180786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:10.712 [2024-11-19 17:42:10.180800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.712 [2024-11-19 17:42:10.180807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:10.712 [2024-11-19 17:42:10.180820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.712 [2024-11-19 17:42:10.180828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:10.712 [2024-11-19 17:42:10.180844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.712 [2024-11-19 17:42:10.180852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:10.712 10372.37 IOPS, 40.52 MiB/s [2024-11-19T16:42:12.935Z] 10398.29 IOPS, 40.62 MiB/s [2024-11-19T16:42:12.935Z] Received shutdown signal, test time was about 28.892929 seconds 00:24:10.712 00:24:10.712 Latency(us) 00:24:10.712 [2024-11-19T16:42:12.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.712 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.712 Verification LBA range: start 0x0 length 0x4000 00:24:10.712 Nvme0n1 : 28.89 10424.24 40.72 0.00 0.00 12258.95 423.85 3019898.88 00:24:10.712 [2024-11-19T16:42:12.935Z] =================================================================================================================== 00:24:10.712 [2024-11-19T16:42:12.935Z] Total : 10424.24 40.72 0.00 0.00 12258.95 423.85 3019898.88 00:24:10.712 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.712 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:10.712 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:10.971 17:42:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:10.971 rmmod nvme_tcp 00:24:10.971 rmmod nvme_fabrics 00:24:10.971 rmmod nvme_keyring 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3565148 ']' 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3565148 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3565148 ']' 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3565148 00:24:10.971 17:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:10.971 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.971 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3565148 00:24:10.971 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:10.971 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:10.971 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3565148' 00:24:10.971 killing process with pid 3565148 00:24:10.971 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3565148 00:24:10.971 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3565148 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.231 17:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.137 17:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.137 00:24:13.137 real 0m40.769s 00:24:13.137 user 1m50.713s 00:24:13.137 sys 0m11.516s 00:24:13.137 17:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.137 17:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:13.137 ************************************ 00:24:13.137 END TEST nvmf_host_multipath_status 00:24:13.137 ************************************ 00:24:13.137 17:42:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:13.137 17:42:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:13.137 17:42:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.137 17:42:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.397 ************************************ 00:24:13.397 START TEST nvmf_discovery_remove_ifc 00:24:13.397 ************************************ 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:13.397 * Looking for test storage... 
00:24:13.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:13.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.397 --rc genhtml_branch_coverage=1 00:24:13.397 --rc genhtml_function_coverage=1 00:24:13.397 --rc genhtml_legend=1 00:24:13.397 --rc geninfo_all_blocks=1 00:24:13.397 --rc geninfo_unexecuted_blocks=1 00:24:13.397 00:24:13.397 ' 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:13.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.397 --rc genhtml_branch_coverage=1 00:24:13.397 --rc genhtml_function_coverage=1 00:24:13.397 --rc genhtml_legend=1 00:24:13.397 --rc geninfo_all_blocks=1 00:24:13.397 --rc geninfo_unexecuted_blocks=1 00:24:13.397 00:24:13.397 ' 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:13.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.397 --rc genhtml_branch_coverage=1 00:24:13.397 --rc genhtml_function_coverage=1 00:24:13.397 --rc genhtml_legend=1 00:24:13.397 --rc geninfo_all_blocks=1 00:24:13.397 --rc geninfo_unexecuted_blocks=1 00:24:13.397 00:24:13.397 ' 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:13.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.397 --rc genhtml_branch_coverage=1 00:24:13.397 --rc genhtml_function_coverage=1 00:24:13.397 --rc genhtml_legend=1 00:24:13.397 --rc geninfo_all_blocks=1 00:24:13.397 --rc geninfo_unexecuted_blocks=1 00:24:13.397 00:24:13.397 ' 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.397 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.398 
17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.398 17:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.104 17:42:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.104 17:42:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:20.104 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.104 17:42:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:20.104 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:20.104 Found net devices under 0000:86:00.0: cvl_0_0 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:20.104 Found net devices under 0000:86:00.1: cvl_0_1 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.104 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:24:20.105 00:24:20.105 --- 10.0.0.2 ping statistics --- 00:24:20.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.105 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:24:20.105 00:24:20.105 --- 10.0.0.1 ping statistics --- 00:24:20.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.105 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3574559 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 3574559 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3574559 ']' 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.105 [2024-11-19 17:42:21.583642] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:24:20.105 [2024-11-19 17:42:21.583686] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.105 [2024-11-19 17:42:21.660724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.105 [2024-11-19 17:42:21.701912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.105 [2024-11-19 17:42:21.701955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:20.105 [2024-11-19 17:42:21.701963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.105 [2024-11-19 17:42:21.701969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.105 [2024-11-19 17:42:21.701974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.105 [2024-11-19 17:42:21.702542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.105 [2024-11-19 17:42:21.840785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.105 [2024-11-19 17:42:21.848955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:20.105 null0 00:24:20.105 [2024-11-19 17:42:21.880953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3574715 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3574715 /tmp/host.sock 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3574715 ']' 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:20.105 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.105 17:42:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.105 [2024-11-19 17:42:21.950835] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:24:20.106 [2024-11-19 17:42:21.950883] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574715 ] 00:24:20.106 [2024-11-19 17:42:22.024438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.106 [2024-11-19 17:42:22.067538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.106 17:42:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.106 17:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.043 [2024-11-19 17:42:23.207109] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.043 [2024-11-19 17:42:23.207129] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.043 [2024-11-19 17:42:23.207144] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.301 [2024-11-19 17:42:23.334555] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:21.301 [2024-11-19 17:42:23.517458] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:21.301 [2024-11-19 17:42:23.518173] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb809f0:1 started. 
00:24:21.301 [2024-11-19 17:42:23.519541] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:21.301 [2024-11-19 17:42:23.519581] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:21.301 [2024-11-19 17:42:23.519600] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:21.301 [2024-11-19 17:42:23.519614] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:21.301 [2024-11-19 17:42:23.519632] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:21.301 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.301 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:21.560 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.561 [2024-11-19 17:42:23.526460] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb809f0 was disconnected and freed. delete nvme_qpair. 
00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:21.561 17:42:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.497 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.497 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.497 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.497 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.497 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.497 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.757 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.757 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.757 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:22.757 17:42:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:23.693 17:42:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.631 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.889 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:24.889 17:42:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.827 17:42:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:25.827 17:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.766 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.766 [2024-11-19 17:42:28.961174] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:26.766 
[2024-11-19 17:42:28.961217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.766 [2024-11-19 17:42:28.961232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.766 [2024-11-19 17:42:28.961241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.766 [2024-11-19 17:42:28.961249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.766 [2024-11-19 17:42:28.961256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.766 [2024-11-19 17:42:28.961262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.766 [2024-11-19 17:42:28.961270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.766 [2024-11-19 17:42:28.961276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.766 [2024-11-19 17:42:28.961284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.767 [2024-11-19 17:42:28.961290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.767 [2024-11-19 17:42:28.961297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5d220 is same with the state(6) to be set 00:24:26.767 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 
!= '' ]] 00:24:26.767 17:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.767 [2024-11-19 17:42:28.971196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5d220 (9): Bad file descriptor 00:24:26.767 [2024-11-19 17:42:28.981231] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:26.767 [2024-11-19 17:42:28.981242] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:26.767 [2024-11-19 17:42:28.981247] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:26.767 [2024-11-19 17:42:28.981252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:26.767 [2024-11-19 17:42:28.981272] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:28.144 17:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.144 17:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.144 17:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.144 17:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.144 17:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.144 17:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.144 17:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.144 [2024-11-19 17:42:30.003960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:28.144 [2024-11-19 17:42:30.004000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5d220 with addr=10.0.0.2, port=4420 00:24:28.144 [2024-11-19 17:42:30.004015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5d220 is same with the state(6) to be set 00:24:28.144 [2024-11-19 17:42:30.004039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5d220 (9): Bad file descriptor 00:24:28.144 [2024-11-19 17:42:30.004429] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:28.144 [2024-11-19 17:42:30.004455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:28.144 [2024-11-19 17:42:30.004465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:28.144 [2024-11-19 17:42:30.004476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:28.144 [2024-11-19 17:42:30.004485] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:28.144 [2024-11-19 17:42:30.004492] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:28.144 [2024-11-19 17:42:30.004498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:28.144 [2024-11-19 17:42:30.004508] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:28.144 [2024-11-19 17:42:30.004514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:28.144 17:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.144 17:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:28.144 17:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:29.083 [2024-11-19 17:42:31.006992] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:29.083 [2024-11-19 17:42:31.007017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:29.083 [2024-11-19 17:42:31.007031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:29.083 [2024-11-19 17:42:31.007039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:29.083 [2024-11-19 17:42:31.007047] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:29.083 [2024-11-19 17:42:31.007054] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:29.083 [2024-11-19 17:42:31.007059] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:29.083 [2024-11-19 17:42:31.007063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:29.083 [2024-11-19 17:42:31.007085] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:29.083 [2024-11-19 17:42:31.007108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.083 [2024-11-19 17:42:31.007118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.083 [2024-11-19 17:42:31.007128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.083 [2024-11-19 17:42:31.007135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.083 [2024-11-19 17:42:31.007142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:29.083 [2024-11-19 17:42:31.007149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.083 [2024-11-19 17:42:31.007156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.083 [2024-11-19 17:42:31.007162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.083 [2024-11-19 17:42:31.007175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.083 [2024-11-19 17:42:31.007182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.083 [2024-11-19 17:42:31.007189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:29.083 [2024-11-19 17:42:31.007283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4c900 (9): Bad file descriptor 00:24:29.083 [2024-11-19 17:42:31.008296] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:29.083 [2024-11-19 17:42:31.008307] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:29.083 17:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.021 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.281 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:30.281 17:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.849 [2024-11-19 17:42:33.062374] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:30.849 [2024-11-19 17:42:33.062392] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:30.849 [2024-11-19 17:42:33.062405] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:31.109 [2024-11-19 17:42:33.188803] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.109 [2024-11-19 17:42:33.283558] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:31.109 [2024-11-19 17:42:33.284203] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xb51760:1 started. 
00:24:31.109 [2024-11-19 17:42:33.285249] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:31.109 [2024-11-19 17:42:33.285281] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:31.109 [2024-11-19 17:42:33.285298] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:31.109 [2024-11-19 17:42:33.285312] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:31.109 [2024-11-19 17:42:33.285319] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:31.109 17:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.369 [2024-11-19 17:42:33.332542] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xb51760 was disconnected and freed. delete nvme_qpair. 
00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:32.307 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3574715 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3574715 ']' 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3574715 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3574715 
00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3574715' 00:24:32.308 killing process with pid 3574715 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3574715 00:24:32.308 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3574715 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.567 rmmod nvme_tcp 00:24:32.567 rmmod nvme_fabrics 00:24:32.567 rmmod nvme_keyring 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3574559 ']' 00:24:32.567 
17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3574559 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3574559 ']' 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3574559 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3574559 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3574559' 00:24:32.567 killing process with pid 3574559 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3574559 00:24:32.567 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3574559 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:32.827 17:42:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.827 17:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.739 17:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:34.739 00:24:34.739 real 0m21.548s 00:24:34.739 user 0m26.747s 00:24:34.739 sys 0m6.006s 00:24:34.739 17:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.739 17:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.739 ************************************ 00:24:34.739 END TEST nvmf_discovery_remove_ifc 00:24:34.739 ************************************ 00:24:34.999 17:42:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:34.999 17:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:34.999 17:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.999 17:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.999 ************************************ 
00:24:34.999 START TEST nvmf_identify_kernel_target 00:24:34.999 ************************************ 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:34.999 * Looking for test storage... 00:24:34.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.999 17:42:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:34.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.999 --rc genhtml_branch_coverage=1 00:24:34.999 --rc genhtml_function_coverage=1 00:24:34.999 --rc genhtml_legend=1 00:24:34.999 --rc geninfo_all_blocks=1 00:24:34.999 --rc geninfo_unexecuted_blocks=1 00:24:34.999 00:24:34.999 ' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:34.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.999 --rc genhtml_branch_coverage=1 00:24:34.999 --rc genhtml_function_coverage=1 00:24:34.999 --rc genhtml_legend=1 00:24:34.999 --rc geninfo_all_blocks=1 00:24:34.999 --rc geninfo_unexecuted_blocks=1 00:24:34.999 00:24:34.999 ' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:34.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.999 --rc genhtml_branch_coverage=1 00:24:34.999 --rc genhtml_function_coverage=1 00:24:34.999 --rc genhtml_legend=1 00:24:34.999 --rc geninfo_all_blocks=1 00:24:34.999 --rc geninfo_unexecuted_blocks=1 00:24:34.999 00:24:34.999 ' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:34.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.999 --rc genhtml_branch_coverage=1 00:24:34.999 --rc genhtml_function_coverage=1 00:24:34.999 --rc genhtml_legend=1 00:24:34.999 --rc geninfo_all_blocks=1 
00:24:34.999 --rc geninfo_unexecuted_blocks=1 00:24:34.999 00:24:34.999 ' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.999 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.000 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.258 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.258 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.258 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.258 17:42:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.843 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.843 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.843 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.844 17:42:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:41.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.844 17:42:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:41.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.844 17:42:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:41.844 Found net devices under 0000:86:00.0: cvl_0_0 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:41.844 Found net devices under 0000:86:00.1: cvl_0_1 
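The device-enumeration loop above (`gather_supported_nvmf_pci_devs`) resolves each supported PCI NIC to its kernel net device by globbing sysfs. A minimal sketch of that lookup, with the sysfs root as a parameter so it can be exercised against a fake tree (the BDF and `cvl_0_0` name are the values from this log; they are machine-specific):

```shell
#!/usr/bin/env bash
# Map a PCI function (bus:device.function) to its net device names, the way
# the log's gather_supported_nvmf_pci_devs does: every network PCI function
# lists its interfaces under <sysfs>/bus/pci/devices/<bdf>/net/.
pci_net_devs() {
    local sysfs=$1 bdf=$2 d
    for d in "$sysfs/bus/pci/devices/$bdf/net/"*; do
        # The glob stays literal when the directory is empty or absent,
        # so check existence before emitting the interface name.
        if [[ -e "$d" ]]; then
            echo "${d##*/}"    # strip the path, keep e.g. "cvl_0_0"
        fi
    done
}
```

On the test node in this log, `pci_net_devs /sys 0000:86:00.0` would print `cvl_0_0`; on other machines the BDF and interface name will differ.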
00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:41.844 17:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.844 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.844 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.844 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:41.844 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:41.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:41.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:24:41.844 00:24:41.844 --- 10.0.0.2 ping statistics --- 00:24:41.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.844 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:41.844 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:41.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:41.844 00:24:41.844 --- 10.0.0.1 ping statistics --- 00:24:41.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.844 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:41.845 
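The `nvmf_tcp_init` sequence above builds a two-endpoint test network on a single host: one port of the NIC is moved into a network namespace (the target side), the other stays in the root namespace (the initiator side), and a firewall rule admits NVMe/TCP traffic on port 4420. A sketch of those steps, shown in dry-run form so it is safe to read and run (interface names, the namespace name, and addresses are taken from this log and are hardware-specific assumptions):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP topology from the log above.
# Set RUN_FOR_REAL=1 (as root, with matching NICs) to actually execute.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INIT_IF=cvl_0_1          # stays in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INIT_IP=10.0.0.1

run() {
    # Echo the command instead of executing unless explicitly enabled.
    if [[ "${RUN_FOR_REAL:-0}" == 1 ]]; then "$@"; else echo "+ $*"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INIT_IP/24" dev "$INIT_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"    # initiator -> target reachability check
```

The namespace gives the target its own network stack, which is why the log pings in both directions (root namespace to 10.0.0.2, and `ip netns exec ... ping` back to 10.0.0.1) before declaring the topology ready.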
17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:41.845 17:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:43.750 Waiting for block devices as requested 00:24:43.750 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:44.008 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:44.008 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:44.008 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:44.267 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:44.267 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:44.267 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:44.526 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:44.526 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:44.526 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:44.526 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:44.785 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:44.785 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:44.785 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:45.044 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:45.044 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:45.044 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:45.302 No valid GPT data, bailing 00:24:45.302 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:45.303 00:24:45.303 Discovery Log Number of Records 2, Generation counter 2 00:24:45.303 =====Discovery Log Entry 0====== 00:24:45.303 trtype: tcp 00:24:45.303 adrfam: ipv4 00:24:45.303 subtype: current discovery subsystem 
00:24:45.303 treq: not specified, sq flow control disable supported 00:24:45.303 portid: 1 00:24:45.303 trsvcid: 4420 00:24:45.303 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:45.303 traddr: 10.0.0.1 00:24:45.303 eflags: none 00:24:45.303 sectype: none 00:24:45.303 =====Discovery Log Entry 1====== 00:24:45.303 trtype: tcp 00:24:45.303 adrfam: ipv4 00:24:45.303 subtype: nvme subsystem 00:24:45.303 treq: not specified, sq flow control disable supported 00:24:45.303 portid: 1 00:24:45.303 trsvcid: 4420 00:24:45.303 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:45.303 traddr: 10.0.0.1 00:24:45.303 eflags: none 00:24:45.303 sectype: none 00:24:45.303 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:45.303 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:45.562 ===================================================== 00:24:45.562 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:45.562 ===================================================== 00:24:45.562 Controller Capabilities/Features 00:24:45.562 ================================ 00:24:45.562 Vendor ID: 0000 00:24:45.562 Subsystem Vendor ID: 0000 00:24:45.562 Serial Number: c5c5eadf8eb624a43230 00:24:45.562 Model Number: Linux 00:24:45.562 Firmware Version: 6.8.9-20 00:24:45.562 Recommended Arb Burst: 0 00:24:45.562 IEEE OUI Identifier: 00 00 00 00:24:45.562 Multi-path I/O 00:24:45.562 May have multiple subsystem ports: No 00:24:45.562 May have multiple controllers: No 00:24:45.562 Associated with SR-IOV VF: No 00:24:45.562 Max Data Transfer Size: Unlimited 00:24:45.562 Max Number of Namespaces: 0 00:24:45.562 Max Number of I/O Queues: 1024 00:24:45.562 NVMe Specification Version (VS): 1.3 00:24:45.562 NVMe Specification Version (Identify): 1.3 00:24:45.562 Maximum Queue Entries: 1024 
00:24:45.562 Contiguous Queues Required: No 00:24:45.562 Arbitration Mechanisms Supported 00:24:45.562 Weighted Round Robin: Not Supported 00:24:45.562 Vendor Specific: Not Supported 00:24:45.562 Reset Timeout: 7500 ms 00:24:45.562 Doorbell Stride: 4 bytes 00:24:45.562 NVM Subsystem Reset: Not Supported 00:24:45.562 Command Sets Supported 00:24:45.562 NVM Command Set: Supported 00:24:45.562 Boot Partition: Not Supported 00:24:45.562 Memory Page Size Minimum: 4096 bytes 00:24:45.562 Memory Page Size Maximum: 4096 bytes 00:24:45.562 Persistent Memory Region: Not Supported 00:24:45.562 Optional Asynchronous Events Supported 00:24:45.562 Namespace Attribute Notices: Not Supported 00:24:45.562 Firmware Activation Notices: Not Supported 00:24:45.562 ANA Change Notices: Not Supported 00:24:45.562 PLE Aggregate Log Change Notices: Not Supported 00:24:45.562 LBA Status Info Alert Notices: Not Supported 00:24:45.562 EGE Aggregate Log Change Notices: Not Supported 00:24:45.562 Normal NVM Subsystem Shutdown event: Not Supported 00:24:45.562 Zone Descriptor Change Notices: Not Supported 00:24:45.562 Discovery Log Change Notices: Supported 00:24:45.562 Controller Attributes 00:24:45.562 128-bit Host Identifier: Not Supported 00:24:45.562 Non-Operational Permissive Mode: Not Supported 00:24:45.562 NVM Sets: Not Supported 00:24:45.562 Read Recovery Levels: Not Supported 00:24:45.562 Endurance Groups: Not Supported 00:24:45.562 Predictable Latency Mode: Not Supported 00:24:45.562 Traffic Based Keep ALive: Not Supported 00:24:45.562 Namespace Granularity: Not Supported 00:24:45.562 SQ Associations: Not Supported 00:24:45.562 UUID List: Not Supported 00:24:45.562 Multi-Domain Subsystem: Not Supported 00:24:45.562 Fixed Capacity Management: Not Supported 00:24:45.562 Variable Capacity Management: Not Supported 00:24:45.562 Delete Endurance Group: Not Supported 00:24:45.562 Delete NVM Set: Not Supported 00:24:45.562 Extended LBA Formats Supported: Not Supported 00:24:45.562 Flexible 
Data Placement Supported: Not Supported 00:24:45.562 00:24:45.562 Controller Memory Buffer Support 00:24:45.562 ================================ 00:24:45.562 Supported: No 00:24:45.562 00:24:45.562 Persistent Memory Region Support 00:24:45.562 ================================ 00:24:45.562 Supported: No 00:24:45.562 00:24:45.562 Admin Command Set Attributes 00:24:45.562 ============================ 00:24:45.562 Security Send/Receive: Not Supported 00:24:45.562 Format NVM: Not Supported 00:24:45.562 Firmware Activate/Download: Not Supported 00:24:45.562 Namespace Management: Not Supported 00:24:45.562 Device Self-Test: Not Supported 00:24:45.562 Directives: Not Supported 00:24:45.562 NVMe-MI: Not Supported 00:24:45.562 Virtualization Management: Not Supported 00:24:45.562 Doorbell Buffer Config: Not Supported 00:24:45.562 Get LBA Status Capability: Not Supported 00:24:45.562 Command & Feature Lockdown Capability: Not Supported 00:24:45.563 Abort Command Limit: 1 00:24:45.563 Async Event Request Limit: 1 00:24:45.563 Number of Firmware Slots: N/A 00:24:45.563 Firmware Slot 1 Read-Only: N/A 00:24:45.563 Firmware Activation Without Reset: N/A 00:24:45.563 Multiple Update Detection Support: N/A 00:24:45.563 Firmware Update Granularity: No Information Provided 00:24:45.563 Per-Namespace SMART Log: No 00:24:45.563 Asymmetric Namespace Access Log Page: Not Supported 00:24:45.563 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:45.563 Command Effects Log Page: Not Supported 00:24:45.563 Get Log Page Extended Data: Supported 00:24:45.563 Telemetry Log Pages: Not Supported 00:24:45.563 Persistent Event Log Pages: Not Supported 00:24:45.563 Supported Log Pages Log Page: May Support 00:24:45.563 Commands Supported & Effects Log Page: Not Supported 00:24:45.563 Feature Identifiers & Effects Log Page:May Support 00:24:45.563 NVMe-MI Commands & Effects Log Page: May Support 00:24:45.563 Data Area 4 for Telemetry Log: Not Supported 00:24:45.563 Error Log Page Entries 
Supported: 1 00:24:45.563 Keep Alive: Not Supported 00:24:45.563 00:24:45.563 NVM Command Set Attributes 00:24:45.563 ========================== 00:24:45.563 Submission Queue Entry Size 00:24:45.563 Max: 1 00:24:45.563 Min: 1 00:24:45.563 Completion Queue Entry Size 00:24:45.563 Max: 1 00:24:45.563 Min: 1 00:24:45.563 Number of Namespaces: 0 00:24:45.563 Compare Command: Not Supported 00:24:45.563 Write Uncorrectable Command: Not Supported 00:24:45.563 Dataset Management Command: Not Supported 00:24:45.563 Write Zeroes Command: Not Supported 00:24:45.563 Set Features Save Field: Not Supported 00:24:45.563 Reservations: Not Supported 00:24:45.563 Timestamp: Not Supported 00:24:45.563 Copy: Not Supported 00:24:45.563 Volatile Write Cache: Not Present 00:24:45.563 Atomic Write Unit (Normal): 1 00:24:45.563 Atomic Write Unit (PFail): 1 00:24:45.563 Atomic Compare & Write Unit: 1 00:24:45.563 Fused Compare & Write: Not Supported 00:24:45.563 Scatter-Gather List 00:24:45.563 SGL Command Set: Supported 00:24:45.563 SGL Keyed: Not Supported 00:24:45.563 SGL Bit Bucket Descriptor: Not Supported 00:24:45.563 SGL Metadata Pointer: Not Supported 00:24:45.563 Oversized SGL: Not Supported 00:24:45.563 SGL Metadata Address: Not Supported 00:24:45.563 SGL Offset: Supported 00:24:45.563 Transport SGL Data Block: Not Supported 00:24:45.563 Replay Protected Memory Block: Not Supported 00:24:45.563 00:24:45.563 Firmware Slot Information 00:24:45.563 ========================= 00:24:45.563 Active slot: 0 00:24:45.563 00:24:45.563 00:24:45.563 Error Log 00:24:45.563 ========= 00:24:45.563 00:24:45.563 Active Namespaces 00:24:45.563 ================= 00:24:45.563 Discovery Log Page 00:24:45.563 ================== 00:24:45.563 Generation Counter: 2 00:24:45.563 Number of Records: 2 00:24:45.563 Record Format: 0 00:24:45.563 00:24:45.563 Discovery Log Entry 0 00:24:45.563 ---------------------- 00:24:45.563 Transport Type: 3 (TCP) 00:24:45.563 Address Family: 1 (IPv4) 00:24:45.563 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:45.563 Entry Flags: 00:24:45.563 Duplicate Returned Information: 0 00:24:45.563 Explicit Persistent Connection Support for Discovery: 0 00:24:45.563 Transport Requirements: 00:24:45.563 Secure Channel: Not Specified 00:24:45.563 Port ID: 1 (0x0001) 00:24:45.563 Controller ID: 65535 (0xffff) 00:24:45.563 Admin Max SQ Size: 32 00:24:45.563 Transport Service Identifier: 4420 00:24:45.563 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:45.563 Transport Address: 10.0.0.1 00:24:45.563 Discovery Log Entry 1 00:24:45.563 ---------------------- 00:24:45.563 Transport Type: 3 (TCP) 00:24:45.563 Address Family: 1 (IPv4) 00:24:45.563 Subsystem Type: 2 (NVM Subsystem) 00:24:45.563 Entry Flags: 00:24:45.563 Duplicate Returned Information: 0 00:24:45.563 Explicit Persistent Connection Support for Discovery: 0 00:24:45.563 Transport Requirements: 00:24:45.563 Secure Channel: Not Specified 00:24:45.563 Port ID: 1 (0x0001) 00:24:45.563 Controller ID: 65535 (0xffff) 00:24:45.563 Admin Max SQ Size: 32 00:24:45.563 Transport Service Identifier: 4420 00:24:45.563 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:45.563 Transport Address: 10.0.0.1 00:24:45.563 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:45.563 get_feature(0x01) failed 00:24:45.563 get_feature(0x02) failed 00:24:45.563 get_feature(0x04) failed 00:24:45.563 ===================================================== 00:24:45.563 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:45.563 ===================================================== 00:24:45.563 Controller Capabilities/Features 00:24:45.563 ================================ 00:24:45.563 Vendor ID: 0000 00:24:45.563 Subsystem Vendor ID: 
0000 00:24:45.563 Serial Number: 0bc63c62cca825a1714d 00:24:45.563 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:45.563 Firmware Version: 6.8.9-20 00:24:45.563 Recommended Arb Burst: 6 00:24:45.563 IEEE OUI Identifier: 00 00 00 00:24:45.563 Multi-path I/O 00:24:45.563 May have multiple subsystem ports: Yes 00:24:45.563 May have multiple controllers: Yes 00:24:45.563 Associated with SR-IOV VF: No 00:24:45.563 Max Data Transfer Size: Unlimited 00:24:45.563 Max Number of Namespaces: 1024 00:24:45.563 Max Number of I/O Queues: 128 00:24:45.563 NVMe Specification Version (VS): 1.3 00:24:45.563 NVMe Specification Version (Identify): 1.3 00:24:45.563 Maximum Queue Entries: 1024 00:24:45.563 Contiguous Queues Required: No 00:24:45.563 Arbitration Mechanisms Supported 00:24:45.563 Weighted Round Robin: Not Supported 00:24:45.563 Vendor Specific: Not Supported 00:24:45.563 Reset Timeout: 7500 ms 00:24:45.563 Doorbell Stride: 4 bytes 00:24:45.563 NVM Subsystem Reset: Not Supported 00:24:45.563 Command Sets Supported 00:24:45.563 NVM Command Set: Supported 00:24:45.563 Boot Partition: Not Supported 00:24:45.563 Memory Page Size Minimum: 4096 bytes 00:24:45.563 Memory Page Size Maximum: 4096 bytes 00:24:45.563 Persistent Memory Region: Not Supported 00:24:45.563 Optional Asynchronous Events Supported 00:24:45.563 Namespace Attribute Notices: Supported 00:24:45.563 Firmware Activation Notices: Not Supported 00:24:45.563 ANA Change Notices: Supported 00:24:45.563 PLE Aggregate Log Change Notices: Not Supported 00:24:45.563 LBA Status Info Alert Notices: Not Supported 00:24:45.563 EGE Aggregate Log Change Notices: Not Supported 00:24:45.563 Normal NVM Subsystem Shutdown event: Not Supported 00:24:45.563 Zone Descriptor Change Notices: Not Supported 00:24:45.563 Discovery Log Change Notices: Not Supported 00:24:45.563 Controller Attributes 00:24:45.563 128-bit Host Identifier: Supported 00:24:45.563 Non-Operational Permissive Mode: Not Supported 00:24:45.563 NVM Sets: Not 
Supported 00:24:45.563 Read Recovery Levels: Not Supported 00:24:45.563 Endurance Groups: Not Supported 00:24:45.563 Predictable Latency Mode: Not Supported 00:24:45.563 Traffic Based Keep ALive: Supported 00:24:45.563 Namespace Granularity: Not Supported 00:24:45.563 SQ Associations: Not Supported 00:24:45.563 UUID List: Not Supported 00:24:45.563 Multi-Domain Subsystem: Not Supported 00:24:45.563 Fixed Capacity Management: Not Supported 00:24:45.563 Variable Capacity Management: Not Supported 00:24:45.563 Delete Endurance Group: Not Supported 00:24:45.563 Delete NVM Set: Not Supported 00:24:45.563 Extended LBA Formats Supported: Not Supported 00:24:45.563 Flexible Data Placement Supported: Not Supported 00:24:45.563 00:24:45.563 Controller Memory Buffer Support 00:24:45.563 ================================ 00:24:45.563 Supported: No 00:24:45.563 00:24:45.563 Persistent Memory Region Support 00:24:45.563 ================================ 00:24:45.563 Supported: No 00:24:45.563 00:24:45.563 Admin Command Set Attributes 00:24:45.563 ============================ 00:24:45.563 Security Send/Receive: Not Supported 00:24:45.563 Format NVM: Not Supported 00:24:45.563 Firmware Activate/Download: Not Supported 00:24:45.563 Namespace Management: Not Supported 00:24:45.563 Device Self-Test: Not Supported 00:24:45.563 Directives: Not Supported 00:24:45.563 NVMe-MI: Not Supported 00:24:45.563 Virtualization Management: Not Supported 00:24:45.563 Doorbell Buffer Config: Not Supported 00:24:45.563 Get LBA Status Capability: Not Supported 00:24:45.563 Command & Feature Lockdown Capability: Not Supported 00:24:45.563 Abort Command Limit: 4 00:24:45.563 Async Event Request Limit: 4 00:24:45.563 Number of Firmware Slots: N/A 00:24:45.563 Firmware Slot 1 Read-Only: N/A 00:24:45.563 Firmware Activation Without Reset: N/A 00:24:45.564 Multiple Update Detection Support: N/A 00:24:45.564 Firmware Update Granularity: No Information Provided 00:24:45.564 Per-Namespace SMART Log: Yes 
00:24:45.564 Asymmetric Namespace Access Log Page: Supported 00:24:45.564 ANA Transition Time : 10 sec 00:24:45.564 00:24:45.564 Asymmetric Namespace Access Capabilities 00:24:45.564 ANA Optimized State : Supported 00:24:45.564 ANA Non-Optimized State : Supported 00:24:45.564 ANA Inaccessible State : Supported 00:24:45.564 ANA Persistent Loss State : Supported 00:24:45.564 ANA Change State : Supported 00:24:45.564 ANAGRPID is not changed : No 00:24:45.564 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:45.564 00:24:45.564 ANA Group Identifier Maximum : 128 00:24:45.564 Number of ANA Group Identifiers : 128 00:24:45.564 Max Number of Allowed Namespaces : 1024 00:24:45.564 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:45.564 Command Effects Log Page: Supported 00:24:45.564 Get Log Page Extended Data: Supported 00:24:45.564 Telemetry Log Pages: Not Supported 00:24:45.564 Persistent Event Log Pages: Not Supported 00:24:45.564 Supported Log Pages Log Page: May Support 00:24:45.564 Commands Supported & Effects Log Page: Not Supported 00:24:45.564 Feature Identifiers & Effects Log Page:May Support 00:24:45.564 NVMe-MI Commands & Effects Log Page: May Support 00:24:45.564 Data Area 4 for Telemetry Log: Not Supported 00:24:45.564 Error Log Page Entries Supported: 128 00:24:45.564 Keep Alive: Supported 00:24:45.564 Keep Alive Granularity: 1000 ms 00:24:45.564 00:24:45.564 NVM Command Set Attributes 00:24:45.564 ========================== 00:24:45.564 Submission Queue Entry Size 00:24:45.564 Max: 64 00:24:45.564 Min: 64 00:24:45.564 Completion Queue Entry Size 00:24:45.564 Max: 16 00:24:45.564 Min: 16 00:24:45.564 Number of Namespaces: 1024 00:24:45.564 Compare Command: Not Supported 00:24:45.564 Write Uncorrectable Command: Not Supported 00:24:45.564 Dataset Management Command: Supported 00:24:45.564 Write Zeroes Command: Supported 00:24:45.564 Set Features Save Field: Not Supported 00:24:45.564 Reservations: Not Supported 00:24:45.564 Timestamp: Not Supported 
00:24:45.564 Copy: Not Supported 00:24:45.564 Volatile Write Cache: Present 00:24:45.564 Atomic Write Unit (Normal): 1 00:24:45.564 Atomic Write Unit (PFail): 1 00:24:45.564 Atomic Compare & Write Unit: 1 00:24:45.564 Fused Compare & Write: Not Supported 00:24:45.564 Scatter-Gather List 00:24:45.564 SGL Command Set: Supported 00:24:45.564 SGL Keyed: Not Supported 00:24:45.564 SGL Bit Bucket Descriptor: Not Supported 00:24:45.564 SGL Metadata Pointer: Not Supported 00:24:45.564 Oversized SGL: Not Supported 00:24:45.564 SGL Metadata Address: Not Supported 00:24:45.564 SGL Offset: Supported 00:24:45.564 Transport SGL Data Block: Not Supported 00:24:45.564 Replay Protected Memory Block: Not Supported 00:24:45.564 00:24:45.564 Firmware Slot Information 00:24:45.564 ========================= 00:24:45.564 Active slot: 0 00:24:45.564 00:24:45.564 Asymmetric Namespace Access 00:24:45.564 =========================== 00:24:45.564 Change Count : 0 00:24:45.564 Number of ANA Group Descriptors : 1 00:24:45.564 ANA Group Descriptor : 0 00:24:45.564 ANA Group ID : 1 00:24:45.564 Number of NSID Values : 1 00:24:45.564 Change Count : 0 00:24:45.564 ANA State : 1 00:24:45.564 Namespace Identifier : 1 00:24:45.564 00:24:45.564 Commands Supported and Effects 00:24:45.564 ============================== 00:24:45.564 Admin Commands 00:24:45.564 -------------- 00:24:45.564 Get Log Page (02h): Supported 00:24:45.564 Identify (06h): Supported 00:24:45.564 Abort (08h): Supported 00:24:45.564 Set Features (09h): Supported 00:24:45.564 Get Features (0Ah): Supported 00:24:45.564 Asynchronous Event Request (0Ch): Supported 00:24:45.564 Keep Alive (18h): Supported 00:24:45.564 I/O Commands 00:24:45.564 ------------ 00:24:45.564 Flush (00h): Supported 00:24:45.564 Write (01h): Supported LBA-Change 00:24:45.564 Read (02h): Supported 00:24:45.564 Write Zeroes (08h): Supported LBA-Change 00:24:45.564 Dataset Management (09h): Supported 00:24:45.564 00:24:45.564 Error Log 00:24:45.564 ========= 
00:24:45.564 Entry: 0 00:24:45.564 Error Count: 0x3 00:24:45.564 Submission Queue Id: 0x0 00:24:45.564 Command Id: 0x5 00:24:45.564 Phase Bit: 0 00:24:45.564 Status Code: 0x2 00:24:45.564 Status Code Type: 0x0 00:24:45.564 Do Not Retry: 1 00:24:45.564 Error Location: 0x28 00:24:45.564 LBA: 0x0 00:24:45.564 Namespace: 0x0 00:24:45.564 Vendor Log Page: 0x0 00:24:45.564 ----------- 00:24:45.564 Entry: 1 00:24:45.564 Error Count: 0x2 00:24:45.564 Submission Queue Id: 0x0 00:24:45.564 Command Id: 0x5 00:24:45.564 Phase Bit: 0 00:24:45.564 Status Code: 0x2 00:24:45.564 Status Code Type: 0x0 00:24:45.564 Do Not Retry: 1 00:24:45.564 Error Location: 0x28 00:24:45.564 LBA: 0x0 00:24:45.564 Namespace: 0x0 00:24:45.564 Vendor Log Page: 0x0 00:24:45.564 ----------- 00:24:45.564 Entry: 2 00:24:45.564 Error Count: 0x1 00:24:45.564 Submission Queue Id: 0x0 00:24:45.564 Command Id: 0x4 00:24:45.564 Phase Bit: 0 00:24:45.564 Status Code: 0x2 00:24:45.564 Status Code Type: 0x0 00:24:45.564 Do Not Retry: 1 00:24:45.564 Error Location: 0x28 00:24:45.564 LBA: 0x0 00:24:45.564 Namespace: 0x0 00:24:45.564 Vendor Log Page: 0x0 00:24:45.564 00:24:45.564 Number of Queues 00:24:45.564 ================ 00:24:45.564 Number of I/O Submission Queues: 128 00:24:45.564 Number of I/O Completion Queues: 128 00:24:45.564 00:24:45.564 ZNS Specific Controller Data 00:24:45.564 ============================ 00:24:45.564 Zone Append Size Limit: 0 00:24:45.564 00:24:45.564 00:24:45.564 Active Namespaces 00:24:45.564 ================= 00:24:45.564 get_feature(0x05) failed 00:24:45.564 Namespace ID:1 00:24:45.564 Command Set Identifier: NVM (00h) 00:24:45.564 Deallocate: Supported 00:24:45.564 Deallocated/Unwritten Error: Not Supported 00:24:45.564 Deallocated Read Value: Unknown 00:24:45.564 Deallocate in Write Zeroes: Not Supported 00:24:45.564 Deallocated Guard Field: 0xFFFF 00:24:45.564 Flush: Supported 00:24:45.564 Reservation: Not Supported 00:24:45.564 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:45.564 Size (in LBAs): 1953525168 (931GiB) 00:24:45.564 Capacity (in LBAs): 1953525168 (931GiB) 00:24:45.564 Utilization (in LBAs): 1953525168 (931GiB) 00:24:45.564 UUID: 3a2db500-f956-4380-8cce-a7e11f698718 00:24:45.564 Thin Provisioning: Not Supported 00:24:45.564 Per-NS Atomic Units: Yes 00:24:45.564 Atomic Boundary Size (Normal): 0 00:24:45.564 Atomic Boundary Size (PFail): 0 00:24:45.564 Atomic Boundary Offset: 0 00:24:45.564 NGUID/EUI64 Never Reused: No 00:24:45.564 ANA group ID: 1 00:24:45.564 Namespace Write Protected: No 00:24:45.564 Number of LBA Formats: 1 00:24:45.564 Current LBA Format: LBA Format #00 00:24:45.564 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:45.564 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.564 rmmod nvme_tcp 00:24:45.564 rmmod nvme_fabrics 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.564 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.565 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.565 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.565 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.565 17:42:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:48.103 17:42:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:48.103 17:42:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:50.641 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:50.641 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:24:51.579 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:51.579 00:24:51.579 real 0m16.708s 00:24:51.579 user 0m4.394s 00:24:51.579 sys 0m8.682s 00:24:51.579 17:42:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.579 17:42:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.579 ************************************ 00:24:51.579 END TEST nvmf_identify_kernel_target 00:24:51.579 ************************************ 00:24:51.579 17:42:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:51.579 17:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.579 17:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.579 17:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.579 ************************************ 00:24:51.579 START TEST nvmf_auth_host 00:24:51.579 ************************************ 00:24:51.579 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:51.839 * Looking for test storage... 
00:24:51.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.839 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.840 --rc genhtml_branch_coverage=1 00:24:51.840 --rc genhtml_function_coverage=1 00:24:51.840 --rc genhtml_legend=1 00:24:51.840 --rc geninfo_all_blocks=1 00:24:51.840 --rc geninfo_unexecuted_blocks=1 00:24:51.840 00:24:51.840 ' 00:24:51.840 17:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.840 --rc genhtml_branch_coverage=1 00:24:51.840 --rc genhtml_function_coverage=1 00:24:51.840 --rc genhtml_legend=1 00:24:51.840 --rc geninfo_all_blocks=1 00:24:51.840 --rc geninfo_unexecuted_blocks=1 00:24:51.840 00:24:51.840 ' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.840 --rc genhtml_branch_coverage=1 00:24:51.840 --rc genhtml_function_coverage=1 00:24:51.840 --rc genhtml_legend=1 00:24:51.840 --rc geninfo_all_blocks=1 00:24:51.840 --rc geninfo_unexecuted_blocks=1 00:24:51.840 00:24:51.840 ' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.840 --rc genhtml_branch_coverage=1 00:24:51.840 --rc genhtml_function_coverage=1 00:24:51.840 --rc genhtml_legend=1 00:24:51.840 --rc geninfo_all_blocks=1 00:24:51.840 --rc geninfo_unexecuted_blocks=1 00:24:51.840 00:24:51.840 ' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.840 17:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.840 17:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.840 17:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.415 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.416 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.416 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:58.416 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.416 17:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.416 17:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:24:58.416 00:24:58.416 --- 10.0.0.2 ping statistics --- 00:24:58.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.416 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:24:58.416 00:24:58.416 --- 10.0.0.1 ping statistics --- 00:24:58.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.416 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3586713 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3586713 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
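The commands traced above build the test topology on a single machine: one ice port (cvl_0_0) is moved into a fresh network namespace to act as the NVMe/TCP target, while the peer port (cvl_0_1) stays in the root namespace as the initiator, and reachability is verified with a ping in each direction. A condensed sketch of the same setup (interface names and addresses taken from the trace; requires root and the two physical ports, so it is not runnable as-is elsewhere):

```shell
# Move the target-side port into its own namespace so both ends of the
# TCP connection live on one host (names/addresses from the trace).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the trace prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.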
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3586713 ']' 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.416 17:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:58.416 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.417 17:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0b9db83c14c362e9a5098a7495bf506d 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lPF 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0b9db83c14c362e9a5098a7495bf506d 0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0b9db83c14c362e9a5098a7495bf506d 0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0b9db83c14c362e9a5098a7495bf506d 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lPF 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lPF 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lPF 
00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=10372e666437194ae19436fbcf551b803b6e3cf2be1c87836ce043236a7ea4e9 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YMZ 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 10372e666437194ae19436fbcf551b803b6e3cf2be1c87836ce043236a7ea4e9 3 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 10372e666437194ae19436fbcf551b803b6e3cf2be1c87836ce043236a7ea4e9 3 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=10372e666437194ae19436fbcf551b803b6e3cf2be1c87836ce043236a7ea4e9 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YMZ 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YMZ 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.YMZ 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=20165ede821653581639cdc3b07c90cc7133180efe5c1eb0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iaM 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 20165ede821653581639cdc3b07c90cc7133180efe5c1eb0 0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 20165ede821653581639cdc3b07c90cc7133180efe5c1eb0 0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=20165ede821653581639cdc3b07c90cc7133180efe5c1eb0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iaM 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iaM 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.iaM 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eee3097593ca25917fea833876b3a6952911c0f4ece2acef 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1rl 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eee3097593ca25917fea833876b3a6952911c0f4ece2acef 2 00:24:58.417 17:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eee3097593ca25917fea833876b3a6952911c0f4ece2acef 2 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eee3097593ca25917fea833876b3a6952911c0f4ece2acef 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1rl 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1rl 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1rl 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ca4146e5431e3a1be5a048912edccb0 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.A7V 00:24:58.417 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ca4146e5431e3a1be5a048912edccb0 1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ca4146e5431e3a1be5a048912edccb0 1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ca4146e5431e3a1be5a048912edccb0 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.A7V 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.A7V 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.A7V 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=80e4df022853c2cabcadca612f664075 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MEY 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 80e4df022853c2cabcadca612f664075 1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 80e4df022853c2cabcadca612f664075 1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=80e4df022853c2cabcadca612f664075 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MEY 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MEY 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.MEY 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.418 17:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=450bb85e87d523de17678074c1e00e106f65012873cb974f 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CaI 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 450bb85e87d523de17678074c1e00e106f65012873cb974f 2 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 450bb85e87d523de17678074c1e00e106f65012873cb974f 2 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=450bb85e87d523de17678074c1e00e106f65012873cb974f 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:58.418 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.677 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CaI 00:24:58.677 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CaI 00:24:58.677 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CaI 00:24:58.677 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:58.677 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2aa78f7b05f41ef79914bbf6f960c587 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zrw 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2aa78f7b05f41ef79914bbf6f960c587 0 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2aa78f7b05f41ef79914bbf6f960c587 0 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2aa78f7b05f41ef79914bbf6f960c587 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zrw 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zrw 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.zrw 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=044dcac2a54d4c9aa34fee4f2fc62db59a1a0828ce8485cd19548d4e91587cb5 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8Ze 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 044dcac2a54d4c9aa34fee4f2fc62db59a1a0828ce8485cd19548d4e91587cb5 3 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 044dcac2a54d4c9aa34fee4f2fc62db59a1a0828ce8485cd19548d4e91587cb5 3 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=044dcac2a54d4c9aa34fee4f2fc62db59a1a0828ce8485cd19548d4e91587cb5 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:58.678 17:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8Ze 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8Ze 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.8Ze 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3586713 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3586713 ']' 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
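The gen_dhchap_key/format_key helpers traced above draw len/2 random bytes via `xxd -p -c0 -l <n> /dev/urandom` and pass the resulting hex string to an inline `python -` snippet that emits the `DHHC-1:<digest>:<base64>:` secrets seen later in the log. A minimal sketch of that wrapping, assuming (per the DH-HMAC-CHAP secret representation) that the ASCII hex string itself is the secret material and that a little-endian CRC32 of it is appended before base64 encoding; the digest ids 0-3 correspond to the null/sha256/sha384/sha512 mapping in the `digests` table visible in the trace:

```python
import base64
import os
import zlib


def format_key(prefix: str, key: str, digest_id: int) -> str:
    # Assumption: the ASCII hex string is the secret; a 4-byte little-endian
    # CRC32 of it is appended, then the whole thing is base64-encoded.
    # Digest id is rendered as two hex digits (DHHC-1:00:, :01:, :02:, :03:).
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")
    b64 = base64.b64encode(data + crc).decode()
    return f"{prefix}:{digest_id:02x}:{b64}:"


def gen_dhchap_key(digest_id: int, hex_len: int) -> str:
    # Mirrors the trace: xxd -p -c0 -l <hex_len/2> /dev/urandom yields a
    # hex string of hex_len characters, which format_key then wraps.
    key = os.urandom(hex_len // 2).hex()
    return format_key("DHHC-1", key, digest_id)
```

For example, a sha384-class secret (digest id 2, len=48 as in the `key=450bb85e...` step above) comes out as `DHHC-1:02:<base64>:`, which is what gets written to the `/tmp/spdk.key-sha384.*` files and `chmod 0600`-ed. Names and the exact CRC placement are assumptions inferred from the trace, not taken from SPDK source.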
00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.678 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lPF 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.YMZ ]] 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YMZ 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.iaM 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1rl ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1rl 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.A7V 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.MEY ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MEY 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.CaI 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zrw ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zrw 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8Ze 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.937 17:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:58.937 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:58.938 17:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:01.472 Waiting for block devices as requested 00:25:01.732 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:01.732 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.732 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.991 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:01.991 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:01.991 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.991 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.250 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.250 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.250 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.509 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.509 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.509 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.509 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:02.768 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.768 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.768 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:03.336 No valid GPT data, bailing 00:25:03.336 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:03.595 00:25:03.595 Discovery Log Number of Records 2, Generation counter 2 00:25:03.595 =====Discovery Log Entry 0====== 00:25:03.595 trtype: tcp 00:25:03.595 adrfam: ipv4 00:25:03.595 subtype: current discovery subsystem 00:25:03.595 treq: not specified, sq flow control disable supported 00:25:03.595 portid: 1 00:25:03.595 trsvcid: 4420 00:25:03.595 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:03.595 traddr: 10.0.0.1 00:25:03.595 eflags: none 00:25:03.595 sectype: none 00:25:03.595 =====Discovery Log Entry 1====== 00:25:03.595 trtype: tcp 00:25:03.595 adrfam: ipv4 00:25:03.595 subtype: nvme subsystem 00:25:03.595 treq: not specified, sq flow control disable supported 00:25:03.595 portid: 1 00:25:03.595 trsvcid: 4420 00:25:03.595 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:03.595 traddr: 10.0.0.1 00:25:03.595 eflags: none 00:25:03.595 sectype: none 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.595 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.854 nvme0n1 00:25:03.854 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.854 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.855 17:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.114 nvme0n1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.114 17:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.114 
17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.114 nvme0n1 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.114 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:04.374 nvme0n1 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.374 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.634 nvme0n1 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.634 17:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.634 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.894 nvme0n1 00:25:04.894 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.894 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.894 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.894 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.894 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.894 17:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.894 
17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:04.894 
17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.894 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.895 17:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.895 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.154 nvme0n1 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.154 17:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.154 17:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.154 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.155 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.414 nvme0n1 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.414 17:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.414 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.673 nvme0n1 00:25:05.673 17:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:05.673 17:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.673 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.674 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.933 nvme0n1 00:25:05.933 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.933 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.933 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.933 17:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.933 17:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.933 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.192 nvme0n1 00:25:06.192 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.192 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.192 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.192 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.193 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.452 nvme0n1 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:06.452 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.453 
17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.453 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.712 nvme0n1 00:25:06.712 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.712 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.712 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.712 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.712 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.712 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.971 17:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.971 17:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.231 nvme0n1 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.231 17:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:07.231 
17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.231 17:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.231 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.490 nvme0n1 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.490 17:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.490 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.491 
17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.491 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 nvme0n1 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.777 17:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.777 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.144 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.144 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.144 17:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.415 nvme0n1 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.416 17:43:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.416 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.675 nvme0n1 00:25:08.675 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.675 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.675 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.675 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.675 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.676 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.676 17:43:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.935 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.936 17:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.195 nvme0n1 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.195 17:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.195 17:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.195 17:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.195 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.764 nvme0n1 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.764 17:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.764 17:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.023 nvme0n1
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:25:10.024 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J:
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=:
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J:
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]]
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=:
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.283 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.871 nvme0n1
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:10.871 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==:
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==:
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==:
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]]
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==:
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.872 17:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.440 nvme0n1
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN:
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl:
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN:
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl:
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:11.440 17:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.009 nvme0n1
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==:
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI:
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==:
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]]
00:25:12.009 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI:
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.268 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.837 nvme0n1
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=:
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=:
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.837 17:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.406 nvme0n1
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J:
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=:
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J:
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=:
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:13.406 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.666 nvme0n1
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==:
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==:
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==:
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]]
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==:
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests
sha384 --dhchap-dhgroups ffdhe2048 00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.666 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.667 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.927 nvme0n1 
00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:13.927 17:43:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.927 
17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.927 17:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.927 nvme0n1 00:25:13.927 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.187 17:43:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.187 nvme0n1 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.187 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.188 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.188 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.447 17:43:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.447 nvme0n1 00:25:14.447 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.448 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.708 nvme0n1 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.708 
17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.708 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.968 17:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.968 nvme0n1 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 
00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.968 17:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.968 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.228 nvme0n1 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.228 17:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.228 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.488 nvme0n1 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.488 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.747 nvme0n1 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.747 17:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:15.747 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.748 17:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.748 17:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.748 17:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.007 nvme0n1 00:25:16.007 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.007 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.007 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.007 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.007 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.266 
17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.266 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.525 nvme0n1 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:16.525 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.526 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.785 nvme0n1 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.785 17:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 nvme0n1 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.045 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.304 nvme0n1 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.304 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.563 17:43:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.563 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.823 nvme0n1 00:25:17.823 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.823 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.823 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.823 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.823 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.823 17:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.823 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.082 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 nvme0n1 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.912 nvme0n1 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.912 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.913 17:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.181 nvme0n1 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.181 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.442 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.701 nvme0n1 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.701 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.702 17:43:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.702 17:43:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.702 17:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.269 nvme0n1 00:25:20.269 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.269 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.269 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.269 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.269 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.269 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:20.528 17:43:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.528 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.529 17:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.099 nvme0n1 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.099 
17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.099 17:43:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.099 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.669 nvme0n1 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.669 17:43:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.669 17:43:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.669 17:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.238 nvme0n1 00:25:22.238 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.238 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.238 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.238 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.238 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.238 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:22.498 17:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.498 17:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.066 nvme0n1 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.066 
17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.066 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.067 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.326 nvme0n1 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.326 17:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.326 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.586 nvme0n1 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:23.586 17:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.586 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.587 nvme0n1 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.587 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.847 17:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.847 17:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.847 17:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.847 nvme0n1 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.847 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 17:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 nvme0n1 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.369 17:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.369 nvme0n1 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.369 17:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.369 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.629 nvme0n1 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:24.629 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:24.629 
17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.630 17:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.630 17:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.890 nvme0n1 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.890 17:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.890 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.150 nvme0n1 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.150 17:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.150 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.151 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.411 nvme0n1 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.411 
17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.411 17:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.411 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.671 nvme0n1 00:25:25.671 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.671 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.671 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.671 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.671 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.671 17:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:25.930 17:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:25.930 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.931 17:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.931 17:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.190 nvme0n1 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.190 17:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:26.190 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:26.190 17:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.191 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.450 nvme0n1 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.450 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.451 17:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.451 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.709 nvme0n1 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.709 
17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.709 17:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.967 nvme0n1 00:25:26.967 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.967 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.967 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.967 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.967 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.967 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.225 17:43:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.225 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.483 nvme0n1 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:27.483 17:43:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.483 17:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.052 nvme0n1 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.053 
17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.053 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.312 nvme0n1 00:25:28.312 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.312 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.312 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.312 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.312 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.312 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.572 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.572 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.572 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.572 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.572 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.572 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.573 17:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.573 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:28.833 nvme0n1 00:25:28.833 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.833 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.833 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.833 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.833 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 17:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.833 
17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.833 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 nvme0n1 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGI5ZGI4M2MxNGMzNjJlOWE1MDk4YTc0OTViZjUwNmT64Y9J: 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTAzNzJlNjY2NDM3MTk0YWUxOTQzNmZiY2Y1NTFiODAzYjZlM2NmMmJlMWM4NzgzNmNlMDQzMjM2YTdlYTRlOTGJLp8=: 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.402 17:43:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 17:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.972 nvme0n1 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.972 17:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.972 17:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.972 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.541 nvme0n1 00:25:30.541 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.541 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.541 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.541 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.541 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.541 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 17:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.800 17:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.800 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.801 17:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.369 nvme0n1 00:25:31.369 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.369 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.369 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.370 17:43:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUwYmI4NWU4N2Q1MjNkZTE3Njc4MDc0YzFlMDBlMTA2ZjY1MDEyODczY2I5NzRmywugQQ==: 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhNzhmN2IwNWY0MWVmNzk5MTRiYmY2Zjk2MGM1ODcI26GI: 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.370 17:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:31.939 nvme0n1 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZGNhYzJhNTRkNGM5YWEzNGZlZTRmMmZjNjJkYjU5YTFhMDgyOGNlODQ4NWNkMTk1NDhkNGU5MTU4N2NiNYurzOM=: 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.939 
17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.939 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.507 nvme0n1 00:25:32.507 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.507 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.507 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.507 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.507 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.507 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:32.767 
17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.767 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.768 request: 00:25:32.768 { 00:25:32.768 "name": "nvme0", 00:25:32.768 "trtype": "tcp", 00:25:32.768 "traddr": "10.0.0.1", 00:25:32.768 "adrfam": "ipv4", 00:25:32.768 "trsvcid": "4420", 00:25:32.768 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:32.768 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:32.768 "prchk_reftag": false, 00:25:32.768 "prchk_guard": false, 00:25:32.768 "hdgst": false, 00:25:32.768 "ddgst": false, 00:25:32.768 "allow_unrecognized_csi": false, 00:25:32.768 "method": "bdev_nvme_attach_controller", 00:25:32.768 "req_id": 1 00:25:32.768 } 00:25:32.768 Got JSON-RPC error response 00:25:32.768 response: 00:25:32.768 { 00:25:32.768 "code": -5, 00:25:32.768 "message": "Input/output 
error" 00:25:32.768 } 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.768 request: 00:25:32.768 { 00:25:32.768 "name": "nvme0", 00:25:32.768 "trtype": "tcp", 00:25:32.768 "traddr": "10.0.0.1", 
00:25:32.768 "adrfam": "ipv4", 00:25:32.768 "trsvcid": "4420", 00:25:32.768 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:32.768 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:32.768 "prchk_reftag": false, 00:25:32.768 "prchk_guard": false, 00:25:32.768 "hdgst": false, 00:25:32.768 "ddgst": false, 00:25:32.768 "dhchap_key": "key2", 00:25:32.768 "allow_unrecognized_csi": false, 00:25:32.768 "method": "bdev_nvme_attach_controller", 00:25:32.768 "req_id": 1 00:25:32.768 } 00:25:32.768 Got JSON-RPC error response 00:25:32.768 response: 00:25:32.768 { 00:25:32.768 "code": -5, 00:25:32.768 "message": "Input/output error" 00:25:32.768 } 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:32.768 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.028 17:43:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:33.028 17:43:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.028 17:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.028 request: 00:25:33.028 { 00:25:33.028 "name": "nvme0", 00:25:33.028 "trtype": "tcp", 00:25:33.028 "traddr": "10.0.0.1", 00:25:33.028 "adrfam": "ipv4", 00:25:33.028 "trsvcid": "4420", 00:25:33.028 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:33.028 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:33.028 "prchk_reftag": false, 00:25:33.028 "prchk_guard": false, 00:25:33.028 "hdgst": false, 00:25:33.028 "ddgst": false, 00:25:33.028 "dhchap_key": "key1", 00:25:33.028 "dhchap_ctrlr_key": "ckey2", 00:25:33.028 "allow_unrecognized_csi": false, 00:25:33.028 "method": "bdev_nvme_attach_controller", 00:25:33.028 "req_id": 1 00:25:33.028 } 00:25:33.028 Got JSON-RPC error response 00:25:33.028 response: 00:25:33.028 { 00:25:33.028 "code": -5, 00:25:33.028 "message": "Input/output error" 00:25:33.028 } 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.028 nvme0n1 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.028 17:43:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.028 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:33.029 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:33.029 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:33.029 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.029 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.029 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.288 17:43:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.288 request: 00:25:33.288 { 00:25:33.288 "name": "nvme0", 00:25:33.288 "dhchap_key": "key1", 00:25:33.288 "dhchap_ctrlr_key": "ckey2", 00:25:33.288 "method": "bdev_nvme_set_keys", 00:25:33.288 "req_id": 1 00:25:33.288 } 00:25:33.288 Got JSON-RPC error response 00:25:33.288 response: 00:25:33.288 { 00:25:33.288 "code": -13, 00:25:33.288 "message": "Permission denied" 00:25:33.288 } 00:25:33.288 
17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:33.288 17:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:34.225 17:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.225 17:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.225 17:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:34.225 17:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.225 17:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.484 17:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:34.484 17:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.421 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAxNjVlZGU4MjE2NTM1ODE2MzljZGMzYjA3YzkwY2M3MTMzMTgwZWZlNWMxZWIwufj/Mg==: 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: ]] 00:25:35.422 17:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWVlMzA5NzU5M2NhMjU5MTdmZWE4MzM4NzZiM2E2OTUyOTExYzBmNGVjZTJhY2VmvZpCuA==: 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.422 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.681 nvme0n1 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.681 17:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhNDE0NmU1NDMxZTNhMWJlNWEwNDg5MTJlZGNjYjC8S8xN: 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: ]] 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNGRmMDIyODUzYzJjYWJjYWRjYTYxMmY2NjQwNzXUxpzl: 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:35.681 
17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:35.681 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.682 request: 00:25:35.682 { 00:25:35.682 "name": "nvme0", 00:25:35.682 "dhchap_key": "key2", 00:25:35.682 "dhchap_ctrlr_key": "ckey1", 00:25:35.682 "method": "bdev_nvme_set_keys", 00:25:35.682 "req_id": 1 00:25:35.682 } 00:25:35.682 Got JSON-RPC error response 00:25:35.682 response: 00:25:35.682 { 00:25:35.682 "code": -13, 00:25:35.682 "message": "Permission denied" 00:25:35.682 } 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.682 17:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:35.682 17:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:36.619 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.619 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:36.620 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.620 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.620 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:36.879 rmmod nvme_tcp 00:25:36.879 rmmod nvme_fabrics 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3586713 ']' 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3586713 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3586713 ']' 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3586713 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3586713 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3586713' 00:25:36.879 killing process with pid 3586713 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3586713 00:25:36.879 17:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3586713 00:25:36.879 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:36.879 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:36.879 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:36.879 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:25:36.879 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:36.879 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:36.879 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:37.137 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.137 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:37.137 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.137 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.137 17:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:39.041 17:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:42.336 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:42.336 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:42.906 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:42.906 17:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lPF /tmp/spdk.key-null.iaM /tmp/spdk.key-sha256.A7V /tmp/spdk.key-sha384.CaI /tmp/spdk.key-sha512.8Ze 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:42.906 17:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:46.197 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:46.197 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:46.197 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:46.197 00:25:46.197 real 0m54.187s 00:25:46.197 user 0m48.899s 00:25:46.197 sys 0m12.735s 00:25:46.197 17:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.197 17:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.197 ************************************ 00:25:46.197 END TEST nvmf_auth_host 00:25:46.197 ************************************ 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:25:46.197 17:43:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.197 ************************************ 00:25:46.197 START TEST nvmf_digest 00:25:46.197 ************************************ 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:46.197 * Looking for test storage... 00:25:46.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:46.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.197 --rc genhtml_branch_coverage=1 00:25:46.197 --rc genhtml_function_coverage=1 00:25:46.197 --rc genhtml_legend=1 00:25:46.197 --rc geninfo_all_blocks=1 00:25:46.197 --rc geninfo_unexecuted_blocks=1 00:25:46.197 00:25:46.197 ' 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:46.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.197 --rc genhtml_branch_coverage=1 00:25:46.197 --rc genhtml_function_coverage=1 00:25:46.197 --rc genhtml_legend=1 00:25:46.197 --rc geninfo_all_blocks=1 00:25:46.197 --rc geninfo_unexecuted_blocks=1 00:25:46.197 00:25:46.197 ' 00:25:46.197 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:46.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.197 --rc genhtml_branch_coverage=1 00:25:46.198 --rc genhtml_function_coverage=1 00:25:46.198 --rc genhtml_legend=1 00:25:46.198 --rc geninfo_all_blocks=1 00:25:46.198 --rc geninfo_unexecuted_blocks=1 00:25:46.198 00:25:46.198 ' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:46.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.198 --rc genhtml_branch_coverage=1 00:25:46.198 --rc genhtml_function_coverage=1 00:25:46.198 --rc genhtml_legend=1 00:25:46.198 --rc geninfo_all_blocks=1 00:25:46.198 --rc geninfo_unexecuted_blocks=1 00:25:46.198 00:25:46.198 ' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.198 17:43:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.198 17:43:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.783 17:43:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:52.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:52.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:52.783 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:52.784 Found net devices under 0000:86:00.0: cvl_0_0 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:52.784 Found net devices under 0000:86:00.1: cvl_0_1 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.784 17:43:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:25:52.784 00:25:52.784 --- 10.0.0.2 ping statistics --- 00:25:52.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.784 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:52.784 00:25:52.784 --- 10.0.0.1 ping statistics --- 00:25:52.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.784 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:52.784 ************************************ 00:25:52.784 START TEST nvmf_digest_clean 00:25:52.784 ************************************ 00:25:52.784 
17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:52.784 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3600471 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3600471 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3600471 ']' 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.785 17:43:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:52.785 [2024-11-19 17:43:54.289172] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:25:52.785 [2024-11-19 17:43:54.289217] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.785 [2024-11-19 17:43:54.368246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.785 [2024-11-19 17:43:54.409944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.785 [2024-11-19 17:43:54.409984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.785 [2024-11-19 17:43:54.409992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.785 [2024-11-19 17:43:54.409998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.785 [2024-11-19 17:43:54.410003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:52.785 [2024-11-19 17:43:54.410567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:53.095 null0 00:25:53.095 [2024-11-19 17:43:55.232924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.095 [2024-11-19 17:43:55.257123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3600574 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3600574 /var/tmp/bperf.sock 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3600574 ']' 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:53.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.095 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:53.397 [2024-11-19 17:43:55.310125] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:25:53.397 [2024-11-19 17:43:55.310168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600574 ] 00:25:53.397 [2024-11-19 17:43:55.385468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.397 [2024-11-19 17:43:55.428285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.397 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.397 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:53.397 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:53.397 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:53.397 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:53.656 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.656 17:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.915 nvme0n1 00:25:54.173 17:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:54.173 17:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:54.173 Running I/O for 2 seconds... 00:25:56.047 25885.00 IOPS, 101.11 MiB/s [2024-11-19T16:43:58.270Z] 25280.00 IOPS, 98.75 MiB/s 00:25:56.047 Latency(us) 00:25:56.047 [2024-11-19T16:43:58.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.047 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:56.047 nvme0n1 : 2.00 25304.79 98.85 0.00 0.00 5052.79 2564.45 12195.39 00:25:56.047 [2024-11-19T16:43:58.270Z] =================================================================================================================== 00:25:56.047 [2024-11-19T16:43:58.270Z] Total : 25304.79 98.85 0.00 0.00 5052.79 2564.45 12195.39 00:25:56.047 { 00:25:56.047 "results": [ 00:25:56.047 { 00:25:56.047 "job": "nvme0n1", 00:25:56.047 "core_mask": "0x2", 00:25:56.047 "workload": "randread", 00:25:56.047 "status": "finished", 00:25:56.047 "queue_depth": 128, 00:25:56.047 "io_size": 4096, 00:25:56.047 "runtime": 2.004759, 00:25:56.047 "iops": 25304.78725871788, 00:25:56.047 "mibps": 98.84682522936671, 00:25:56.047 "io_failed": 0, 00:25:56.047 "io_timeout": 0, 00:25:56.047 "avg_latency_us": 5052.788023757488, 00:25:56.047 "min_latency_us": 2564.4521739130437, 00:25:56.047 "max_latency_us": 12195.394782608695 00:25:56.047 } 00:25:56.047 ], 00:25:56.047 "core_count": 1 00:25:56.047 } 00:25:56.047 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:56.047 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:56.306 | select(.opcode=="crc32c") 00:25:56.306 | "\(.module_name) \(.executed)"' 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3600574 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3600574 ']' 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3600574 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600574 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600574' 00:25:56.306 killing process with pid 3600574 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3600574 00:25:56.306 Received shutdown signal, test time was about 2.000000 seconds 00:25:56.306 00:25:56.306 Latency(us) 00:25:56.306 [2024-11-19T16:43:58.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.306 [2024-11-19T16:43:58.529Z] =================================================================================================================== 00:25:56.306 [2024-11-19T16:43:58.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.306 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3600574 00:25:56.565 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:56.565 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:56.565 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:56.565 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3601200 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3601200 /var/tmp/bperf.sock 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3601200 ']' 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.566 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.566 [2024-11-19 17:43:58.717287] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:25:56.566 [2024-11-19 17:43:58.717335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601200 ] 00:25:56.566 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.566 Zero copy mechanism will not be used. 
00:25:56.825 [2024-11-19 17:43:58.791999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.825 [2024-11-19 17:43:58.834854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.825 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.825 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:56.825 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:56.825 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:56.825 17:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:57.085 17:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.085 17:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.346 nvme0n1 00:25:57.346 17:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:57.346 17:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.346 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:57.346 Zero copy mechanism will not be used. 00:25:57.346 Running I/O for 2 seconds... 
00:25:59.665 5849.00 IOPS, 731.12 MiB/s [2024-11-19T16:44:01.888Z] 5545.50 IOPS, 693.19 MiB/s 00:25:59.665 Latency(us) 00:25:59.665 [2024-11-19T16:44:01.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.665 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:59.665 nvme0n1 : 2.00 5544.84 693.10 0.00 0.00 2882.82 623.30 11340.58 00:25:59.665 [2024-11-19T16:44:01.888Z] =================================================================================================================== 00:25:59.665 [2024-11-19T16:44:01.888Z] Total : 5544.84 693.10 0.00 0.00 2882.82 623.30 11340.58 00:25:59.665 { 00:25:59.665 "results": [ 00:25:59.665 { 00:25:59.665 "job": "nvme0n1", 00:25:59.665 "core_mask": "0x2", 00:25:59.665 "workload": "randread", 00:25:59.665 "status": "finished", 00:25:59.665 "queue_depth": 16, 00:25:59.665 "io_size": 131072, 00:25:59.665 "runtime": 2.003485, 00:25:59.665 "iops": 5544.838119576638, 00:25:59.665 "mibps": 693.1047649470797, 00:25:59.665 "io_failed": 0, 00:25:59.665 "io_timeout": 0, 00:25:59.665 "avg_latency_us": 2882.824649657348, 00:25:59.665 "min_latency_us": 623.304347826087, 00:25:59.665 "max_latency_us": 11340.577391304349 00:25:59.665 } 00:25:59.665 ], 00:25:59.665 "core_count": 1 00:25:59.665 } 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:59.665 | select(.opcode=="crc32c") 00:25:59.665 | "\(.module_name) \(.executed)"' 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3601200 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3601200 ']' 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3601200 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3601200 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3601200' 00:25:59.665 killing process with pid 3601200 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3601200 00:25:59.665 Received shutdown signal, test time was about 2.000000 seconds 
00:25:59.665 00:25:59.665 Latency(us) 00:25:59.665 [2024-11-19T16:44:01.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.665 [2024-11-19T16:44:01.888Z] =================================================================================================================== 00:25:59.665 [2024-11-19T16:44:01.888Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.665 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3601200 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3601675 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3601675 /var/tmp/bperf.sock 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3601675 ']' 00:25:59.925 17:44:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.925 17:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.925 [2024-11-19 17:44:01.987178] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:25:59.925 [2024-11-19 17:44:01.987228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601675 ] 00:25:59.925 [2024-11-19 17:44:02.063825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.925 [2024-11-19 17:44:02.101412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.184 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.184 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:00.184 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:00.184 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:00.184 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:00.442 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.442 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.442 nvme0n1 00:26:00.701 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:00.701 17:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.701 Running I/O for 2 seconds... 
00:26:02.573 27734.00 IOPS, 108.34 MiB/s [2024-11-19T16:44:04.796Z] 27770.00 IOPS, 108.48 MiB/s 00:26:02.573 Latency(us) 00:26:02.573 [2024-11-19T16:44:04.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.573 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:02.573 nvme0n1 : 2.01 27764.01 108.45 0.00 0.00 4604.35 1816.49 7693.36 00:26:02.573 [2024-11-19T16:44:04.796Z] =================================================================================================================== 00:26:02.573 [2024-11-19T16:44:04.796Z] Total : 27764.01 108.45 0.00 0.00 4604.35 1816.49 7693.36 00:26:02.573 { 00:26:02.573 "results": [ 00:26:02.573 { 00:26:02.573 "job": "nvme0n1", 00:26:02.574 "core_mask": "0x2", 00:26:02.574 "workload": "randwrite", 00:26:02.574 "status": "finished", 00:26:02.574 "queue_depth": 128, 00:26:02.574 "io_size": 4096, 00:26:02.574 "runtime": 2.005042, 00:26:02.574 "iops": 27764.00693850802, 00:26:02.574 "mibps": 108.45315210354696, 00:26:02.574 "io_failed": 0, 00:26:02.574 "io_timeout": 0, 00:26:02.574 "avg_latency_us": 4604.346063088309, 00:26:02.574 "min_latency_us": 1816.486956521739, 00:26:02.574 "max_latency_us": 7693.356521739131 00:26:02.574 } 00:26:02.574 ], 00:26:02.574 "core_count": 1 00:26:02.574 } 00:26:02.833 17:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:02.833 17:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:02.833 17:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:02.833 17:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:02.833 | select(.opcode=="crc32c") 00:26:02.833 | "\(.module_name) \(.executed)"' 00:26:02.833 17:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3601675 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3601675 ']' 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3601675 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.833 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3601675 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3601675' 00:26:03.093 killing process with pid 3601675 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3601675 00:26:03.093 Received shutdown signal, test time was about 2.000000 seconds 
00:26:03.093 00:26:03.093 Latency(us) 00:26:03.093 [2024-11-19T16:44:05.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.093 [2024-11-19T16:44:05.316Z] =================================================================================================================== 00:26:03.093 [2024-11-19T16:44:05.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3601675 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3602157 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3602157 /var/tmp/bperf.sock 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3602157 ']' 00:26:03.093 17:44:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.093 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.093 [2024-11-19 17:44:05.259378] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:26:03.093 [2024-11-19 17:44:05.259429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602157 ] 00:26:03.093 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.093 Zero copy mechanism will not be used. 
00:26:03.352 [2024-11-19 17:44:05.335376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.352 [2024-11-19 17:44:05.372896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.352 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.352 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:03.352 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:03.352 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:03.352 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.611 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.611 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.870 nvme0n1 00:26:03.870 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:03.870 17:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.870 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.870 Zero copy mechanism will not be used. 00:26:03.870 Running I/O for 2 seconds... 
00:26:06.185 6514.00 IOPS, 814.25 MiB/s [2024-11-19T16:44:08.408Z] 6670.00 IOPS, 833.75 MiB/s 00:26:06.185 Latency(us) 00:26:06.185 [2024-11-19T16:44:08.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.185 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:06.185 nvme0n1 : 2.00 6667.07 833.38 0.00 0.00 2395.73 1894.85 11169.61 00:26:06.185 [2024-11-19T16:44:08.408Z] =================================================================================================================== 00:26:06.185 [2024-11-19T16:44:08.408Z] Total : 6667.07 833.38 0.00 0.00 2395.73 1894.85 11169.61 00:26:06.185 { 00:26:06.185 "results": [ 00:26:06.185 { 00:26:06.185 "job": "nvme0n1", 00:26:06.185 "core_mask": "0x2", 00:26:06.185 "workload": "randwrite", 00:26:06.185 "status": "finished", 00:26:06.185 "queue_depth": 16, 00:26:06.185 "io_size": 131072, 00:26:06.185 "runtime": 2.003878, 00:26:06.185 "iops": 6667.072546332661, 00:26:06.185 "mibps": 833.3840682915826, 00:26:06.185 "io_failed": 0, 00:26:06.185 "io_timeout": 0, 00:26:06.185 "avg_latency_us": 2395.7307784431136, 00:26:06.185 "min_latency_us": 1894.8452173913045, 00:26:06.185 "max_latency_us": 11169.613913043479 00:26:06.185 } 00:26:06.185 ], 00:26:06.185 "core_count": 1 00:26:06.185 } 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:06.185 | select(.opcode=="crc32c") 00:26:06.185 | "\(.module_name) \(.executed)"' 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3602157 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3602157 ']' 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3602157 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602157 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602157' 00:26:06.185 killing process with pid 3602157 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3602157 00:26:06.185 Received shutdown signal, test time was about 2.000000 seconds 
00:26:06.185 00:26:06.185 Latency(us) 00:26:06.185 [2024-11-19T16:44:08.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.185 [2024-11-19T16:44:08.408Z] =================================================================================================================== 00:26:06.185 [2024-11-19T16:44:08.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.185 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3602157 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3600471 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3600471 ']' 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3600471 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600471 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600471' 00:26:06.444 killing process with pid 3600471 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3600471 00:26:06.444 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3600471 00:26:06.703 00:26:06.703 
real 0m14.512s 00:26:06.703 user 0m27.415s 00:26:06.703 sys 0m4.566s 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.703 ************************************ 00:26:06.703 END TEST nvmf_digest_clean 00:26:06.703 ************************************ 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:06.703 ************************************ 00:26:06.703 START TEST nvmf_digest_error 00:26:06.703 ************************************ 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3602859 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3602859 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3602859 ']' 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.703 17:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.703 [2024-11-19 17:44:08.857880] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:26:06.704 [2024-11-19 17:44:08.857920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.962 [2024-11-19 17:44:08.936197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.963 [2024-11-19 17:44:08.976964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.963 [2024-11-19 17:44:08.976998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:06.963 [2024-11-19 17:44:08.977006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.963 [2024-11-19 17:44:08.977011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.963 [2024-11-19 17:44:08.977017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.963 [2024-11-19 17:44:08.977568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.963 [2024-11-19 17:44:09.041996] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.963 17:44:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.963 null0 00:26:06.963 [2024-11-19 17:44:09.132272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.963 [2024-11-19 17:44:09.156475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3602884 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3602884 /var/tmp/bperf.sock 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3602884 ']' 
00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.963 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.222 [2024-11-19 17:44:09.210174] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:26:07.222 [2024-11-19 17:44:09.210217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602884 ] 00:26:07.222 [2024-11-19 17:44:09.267862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.222 [2024-11-19 17:44:09.311082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.222 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.222 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:07.222 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:07.222 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:07.482 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:07.482 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.482 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.482 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.482 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.482 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.742 nvme0n1 00:26:07.742 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:07.742 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.742 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.742 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.742 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:07.742 17:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:08.001 Running I/O for 2 seconds... 00:26:08.001 [2024-11-19 17:44:10.034036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.001 [2024-11-19 17:44:10.034075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.002 [2024-11-19 17:44:10.034086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.002 [2024-11-19 17:44:10.043563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.002 [2024-11-19 17:44:10.043590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.002 [2024-11-19 17:44:10.043599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.002 [2024-11-19 17:44:10.057143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.002 [2024-11-19 17:44:10.057168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.002 [2024-11-19 17:44:10.057177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.002 [2024-11-19 17:44:10.065645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.002 [2024-11-19 17:44:10.065667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10720 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.002 [2024-11-19 17:44:10.065675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.002 [2024-11-19 17:44:10.077841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.002 [2024-11-19 17:44:10.077862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.002 [2024-11-19 17:44:10.077870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.002 [2024-11-19 17:44:10.090717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.002 [2024-11-19 17:44:10.090738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.002 [2024-11-19 17:44:10.090747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.002 [2024-11-19 17:44:10.103603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.002 [2024-11-19 17:44:10.103626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.002 [2024-11-19 17:44:10.103636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.002 [2024-11-19 17:44:10.114849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.002 [2024-11-19 17:44:10.114876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.114885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.124540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.124561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.124570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.134258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.134279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.134287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.144845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.144865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.144873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.154516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.154536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.154544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.163547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.163568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.163577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.173365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.173385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.173393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.183531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.183552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.183561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.193412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.193432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.193440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.202548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.202569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.202578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.002 [2024-11-19 17:44:10.213640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.002 [2024-11-19 17:44:10.213661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.002 [2024-11-19 17:44:10.213670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.262 [2024-11-19 17:44:10.222749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.222771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.222779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.232087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.232107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.232116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.244040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.244061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.244070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.256878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.256899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.256908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.268817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.268838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.268847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.277865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.277885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.277894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.291296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.291317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.291329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.302477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.302499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.302506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.312674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.312695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.312703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.323472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.323493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.323501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.335896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.335917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.335925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.348435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.348456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.348464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.357120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.357141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.357148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.370010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.370030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.370039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.382214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.382234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.382242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.393681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.393705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.393713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.401649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.401669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.401677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.412623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.412643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.412651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.421600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.421621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.421629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.433243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.433264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.433272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.441944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.441970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.441978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.452935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.452962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.452970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.465411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.465432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.465441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.263 [2024-11-19 17:44:10.473918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.263 [2024-11-19 17:44:10.473938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.263 [2024-11-19 17:44:10.473951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.485186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.485207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.485215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.497665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.497685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.497693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.506074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.506093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.506100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.517666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.517687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.517695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.529017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.529037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.529045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.539491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.539511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.539519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.549424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.549445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.549453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.558749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.558769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.558777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.568858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.568882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.568890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.578938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.578964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.578972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.588200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.588220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.588228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.596936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.596960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.596968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.606549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.606568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.606576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.615660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.615678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.615686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.626370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.626390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.626398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.635892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.635912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.635920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.644567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.644586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.644594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.654082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.654102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.654110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.665089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.665109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.665117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.675748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.675767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.675775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.685548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.685568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.524 [2024-11-19 17:44:10.685576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.524 [2024-11-19 17:44:10.695985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.524 [2024-11-19 17:44:10.696005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.525 [2024-11-19 17:44:10.696013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.525 [2024-11-19 17:44:10.706419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.525 [2024-11-19 17:44:10.706439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.525 [2024-11-19 17:44:10.706447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.525 [2024-11-19 17:44:10.714596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.525 [2024-11-19 17:44:10.714616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.525 [2024-11-19 17:44:10.714624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.525 [2024-11-19 17:44:10.726292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.525 [2024-11-19 17:44:10.726312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.525 [2024-11-19 17:44:10.726320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.525 [2024-11-19 17:44:10.734739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.525 [2024-11-19 17:44:10.734759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.525 [2024-11-19 17:44:10.734770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.747149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.747171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.747179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.758332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.758352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.758361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.766904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.766924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.766932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.778985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.779005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.779013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.791057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.791077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.791085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.799084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.799103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.799112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.811308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.811327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.811335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.821823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.821845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.821853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.830837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.830861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.830869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.839785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.839805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.839813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.850302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.850322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.850329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.860195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.860218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.860226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.869775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.869795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.869803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.877956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.877976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.877984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.888316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.888336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.888344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.900245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.900265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.786 [2024-11-19 17:44:10.900273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.786 [2024-11-19 17:44:10.909151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370)
00:26:08.786 [2024-11-19 17:44:10.909171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18444 len:1 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.786 [2024-11-19 17:44:10.909179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.786 [2024-11-19 17:44:10.920422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:10.920443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.920452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:10.930627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:10.930649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.930656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:10.939266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:10.939286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.939294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:10.950444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:10.950465] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.950473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:10.959169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:10.959190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.959198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:10.970584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:10.970604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.970612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:10.982019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:10.982040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.982049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:10.989961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 
17:44:10.989981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:10.989989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.787 [2024-11-19 17:44:11.000542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:08.787 [2024-11-19 17:44:11.000562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.787 [2024-11-19 17:44:11.000574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.046 [2024-11-19 17:44:11.011257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.046 [2024-11-19 17:44:11.011279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.046 [2024-11-19 17:44:11.011287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.046 24344.00 IOPS, 95.09 MiB/s [2024-11-19T16:44:11.269Z] [2024-11-19 17:44:11.020208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.046 [2024-11-19 17:44:11.020229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.046 [2024-11-19 17:44:11.020238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.046 [2024-11-19 17:44:11.031051] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.046 [2024-11-19 17:44:11.031072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.046 [2024-11-19 17:44:11.031080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.046 [2024-11-19 17:44:11.041117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.046 [2024-11-19 17:44:11.041137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.046 [2024-11-19 17:44:11.041144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.046 [2024-11-19 17:44:11.051061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.046 [2024-11-19 17:44:11.051082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.046 [2024-11-19 17:44:11.051090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.060590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.060611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.060619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.071042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.071063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.071070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.079719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.079739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.079747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.089802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.089822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.089831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.100157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.100178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.100186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.108680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.108700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.108708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.118815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.118836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.118844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.128366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.128387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.128395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.137965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.137985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 
17:44:11.137993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.147937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.147964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.147972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.157037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.157057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.157065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.165450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.165470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.165482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.176630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.176651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23270 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.176659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.186562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.186582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.186590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.195866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.195885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.195893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.205195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.205215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.205224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.215578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.215598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.215607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.224032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.224053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.224061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.233909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.233928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.233936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.244914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.244934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.244941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.255842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 
00:26:09.047 [2024-11-19 17:44:11.255866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.255874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.047 [2024-11-19 17:44:11.264846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.047 [2024-11-19 17:44:11.264867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.047 [2024-11-19 17:44:11.264875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.277290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.277312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.277320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.288887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.288907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.288915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.298650] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.298670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.298678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.307410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.307429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.307436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.318484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.318504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.318513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.326632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.326652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.326660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.336392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.336413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.336421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.347119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.347139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.347147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.357293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.357313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.357321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.367179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.367199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.367207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.375257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.375277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.375284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.384817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.384837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.384845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.394940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.394965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.394973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.405179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.405200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.405208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.412832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.412853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.412860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.423587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.423607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.423618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.434996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.435017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.435025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.447421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.447442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22676 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.447449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.307 [2024-11-19 17:44:11.459827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.307 [2024-11-19 17:44:11.459847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.307 [2024-11-19 17:44:11.459855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.308 [2024-11-19 17:44:11.472315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.308 [2024-11-19 17:44:11.472335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.308 [2024-11-19 17:44:11.472343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.308 [2024-11-19 17:44:11.480786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.308 [2024-11-19 17:44:11.480806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.308 [2024-11-19 17:44:11.480814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.308 [2024-11-19 17:44:11.492794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.308 [2024-11-19 17:44:11.492814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:75 nsid:1 lba:5536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.308 [2024-11-19 17:44:11.492822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.308 [2024-11-19 17:44:11.504308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.308 [2024-11-19 17:44:11.504327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.308 [2024-11-19 17:44:11.504335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.308 [2024-11-19 17:44:11.516772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.308 [2024-11-19 17:44:11.516792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.308 [2024-11-19 17:44:11.516800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.308 [2024-11-19 17:44:11.525691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.308 [2024-11-19 17:44:11.525711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.308 [2024-11-19 17:44:11.525720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.568 [2024-11-19 17:44:11.536857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.568 [2024-11-19 
17:44:11.536877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.536884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.546729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.546749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.546757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.555973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.555993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.556002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.564243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.564263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.564271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.576874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.576894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.576903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.587988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.588009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.588017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.599359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.599379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.599387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.607699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.607719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.607730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.618023] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.618043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.618050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.628017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.628036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.628045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.636623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.636643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.636650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.646731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.646751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.646759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.656573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.656594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.656602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.665285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.665304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.665311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.675709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.675730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.675737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.684312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.684331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.684339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.695090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.695113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.695122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.705838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.705859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.705867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.716191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.716211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.716218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.725374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.725394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.725403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.737093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.737113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.737121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.747028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.747048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.747056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.756303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.756322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.756330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.765705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.765724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18479 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.765732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.776087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.776107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.776115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.569 [2024-11-19 17:44:11.786634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.569 [2024-11-19 17:44:11.786654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.569 [2024-11-19 17:44:11.786662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.796148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.796169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.796177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.804163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.804182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:4340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.804190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.815551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.815570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.815578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.826225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.826245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.826253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.834732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.834753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.834761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.846916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 
17:44:11.846936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.846944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.855569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.855589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.855598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.867569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.867589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.867601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.879847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.879866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.879874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.892652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.892673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.892681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.905375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.905396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.905404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.916572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.916591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.916599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.924824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.924844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.924852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.937331] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.937351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.937359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.948860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.948879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.830 [2024-11-19 17:44:11.948887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.830 [2024-11-19 17:44:11.961597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.830 [2024-11-19 17:44:11.961617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.831 [2024-11-19 17:44:11.961625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.831 [2024-11-19 17:44:11.973022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.831 [2024-11-19 17:44:11.973042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.831 [2024-11-19 17:44:11.973050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:09.831 [2024-11-19 17:44:11.983035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.831 [2024-11-19 17:44:11.983056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.831 [2024-11-19 17:44:11.983065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.831 [2024-11-19 17:44:11.991875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.831 [2024-11-19 17:44:11.991897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.831 [2024-11-19 17:44:11.991905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.831 [2024-11-19 17:44:12.001102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.831 [2024-11-19 17:44:12.001122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.831 [2024-11-19 17:44:12.001130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.831 [2024-11-19 17:44:12.011004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fed370) 00:26:09.831 [2024-11-19 17:44:12.011023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.831 [2024-11-19 17:44:12.011031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.831 24694.00 IOPS, 96.46 MiB/s 00:26:09.831 Latency(us) 00:26:09.831 [2024-11-19T16:44:12.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.831 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:09.831 nvme0n1 : 2.00 24687.42 96.44 0.00 0.00 5178.39 2550.21 18008.15 00:26:09.831 [2024-11-19T16:44:12.054Z] =================================================================================================================== 00:26:09.831 [2024-11-19T16:44:12.054Z] Total : 24687.42 96.44 0.00 0.00 5178.39 2550.21 18008.15 00:26:09.831 { 00:26:09.831 "results": [ 00:26:09.831 { 00:26:09.831 "job": "nvme0n1", 00:26:09.831 "core_mask": "0x2", 00:26:09.831 "workload": "randread", 00:26:09.831 "status": "finished", 00:26:09.831 "queue_depth": 128, 00:26:09.831 "io_size": 4096, 00:26:09.831 "runtime": 2.003855, 00:26:09.831 "iops": 24687.415007572905, 00:26:09.831 "mibps": 96.43521487333166, 00:26:09.831 "io_failed": 0, 00:26:09.831 "io_timeout": 0, 00:26:09.831 "avg_latency_us": 5178.393298283545, 00:26:09.831 "min_latency_us": 2550.2052173913044, 00:26:09.831 "max_latency_us": 18008.15304347826 00:26:09.831 } 00:26:09.831 ], 00:26:09.831 "core_count": 1 00:26:09.831 } 00:26:09.831 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:09.831 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:09.831 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:09.831 | .driver_specific 00:26:09.831 | .nvme_error 00:26:09.831 | .status_code 00:26:09.831 | .command_transient_transport_error' 00:26:09.831 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3602884 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3602884 ']' 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3602884 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602884 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602884' 00:26:10.091 killing process with pid 3602884 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3602884 00:26:10.091 Received shutdown signal, test time was about 2.000000 seconds 00:26:10.091 00:26:10.091 Latency(us) 00:26:10.091 [2024-11-19T16:44:12.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.091 [2024-11-19T16:44:12.314Z] =================================================================================================================== 00:26:10.091 [2024-11-19T16:44:12.314Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.091 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3602884 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3603396 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3603396 /var/tmp/bperf.sock 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3603396 ']' 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:10.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.350 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:10.351 [2024-11-19 17:44:12.507297] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:26:10.351 [2024-11-19 17:44:12.507345] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603396 ] 00:26:10.351 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.351 Zero copy mechanism will not be used. 00:26:10.610 [2024-11-19 17:44:12.583745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.610 [2024-11-19 17:44:12.626023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.610 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.610 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:10.610 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.610 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.869 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:10.869 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.869 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.869 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.869 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.869 17:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.128 nvme0n1 00:26:11.128 17:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:11.128 17:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.128 17:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.128 17:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.128 17:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:11.128 17:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:11.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:11.128 Zero copy mechanism will not be used. 00:26:11.128 Running I/O for 2 seconds... 
00:26:11.390 [2024-11-19 17:44:13.348896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.348935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.348952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.355017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.355043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.355053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.361079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.361103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.361116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.366417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.366439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.366447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.371658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.371681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.371689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.376974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.376996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.377004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.382294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.382316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.382324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.387563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.387585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.387593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.393099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.393121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.393129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.398835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.398856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.398864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.404634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.404656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.404664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.410084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.410110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.390 [2024-11-19 17:44:13.410118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.415470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.415492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.415500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.419025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.419046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.419054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.423251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.423272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.423281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.428605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.428626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.428635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.434082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.434105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.434113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.439522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.439544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.439552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.444778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.444799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.444807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.450020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.450040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.450048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.455269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.455288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.455296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.390 [2024-11-19 17:44:13.460489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.390 [2024-11-19 17:44:13.460510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.390 [2024-11-19 17:44:13.460518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.465797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.465818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.465826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.471049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 
00:26:11.391 [2024-11-19 17:44:13.471070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.471078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.476299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.476321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.476329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.481570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.481591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.481599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.486878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.486899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.486907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.492130] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.492151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.492159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.497457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.497478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.497490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.502730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.502751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.502759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.507973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.507994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.508002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.513274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.513296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.513304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.518591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.518612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.518619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.523923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.523944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.523958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.529171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.529192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.529200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.534448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.534469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.534477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.539641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.539662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.539670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.544912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.544939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.544954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.550222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.550243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.550250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.555464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.555485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.555493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.560680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.560701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.560709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.565926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.565951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.565960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.571175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.571196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.391 [2024-11-19 17:44:13.571204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.576405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.576426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.576434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.581590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.581611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.581619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.586867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.586888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.586896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.591879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.591901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.591908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.597141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.597162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.391 [2024-11-19 17:44:13.597169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.391 [2024-11-19 17:44:13.602406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.391 [2024-11-19 17:44:13.602427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-11-19 17:44:13.602435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.653 [2024-11-19 17:44:13.607887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.653 [2024-11-19 17:44:13.607911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-11-19 17:44:13.607922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.653 [2024-11-19 17:44:13.613248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.653 [2024-11-19 17:44:13.613271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-11-19 17:44:13.613279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.653 [2024-11-19 17:44:13.618567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.653 [2024-11-19 17:44:13.618589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-11-19 17:44:13.618598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.653 [2024-11-19 17:44:13.623935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.653 [2024-11-19 17:44:13.623962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-11-19 17:44:13.623972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.653 [2024-11-19 17:44:13.629215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.653 [2024-11-19 17:44:13.629236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-11-19 17:44:13.629244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.653 [2024-11-19 17:44:13.634517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 
00:26:11.653 [2024-11-19 17:44:13.634539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-11-19 17:44:13.634551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.653 [2024-11-19 17:44:13.639795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.653 [2024-11-19 17:44:13.639816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-11-19 17:44:13.639824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.645080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.645101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.645109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.650284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.650305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.650313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.655527] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.655548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.655556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.660830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.660851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.660859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.666074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.666095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.666103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.671333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.671354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.671362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.676670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.676691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.676699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.682043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.682064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.682071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.687427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.687448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.687456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.692836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.692856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.692864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.698098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.698120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.698128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.703446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.703468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.703476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.708801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.708823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.708831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.714137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.714159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.714167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.719739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.719760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.719768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.725323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.725343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.725354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.730585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.730607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.730615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.736067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.736089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.654 [2024-11-19 17:44:13.736097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.741433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.741453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.741461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.747015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.747037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.747045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.752477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.752499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.752508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.758036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.758058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.758067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.763387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.763408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.763417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.768826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.768848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.768855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.774268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.774293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.774301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.779665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.779685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.779693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.654 [2024-11-19 17:44:13.785182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.654 [2024-11-19 17:44:13.785215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.654 [2024-11-19 17:44:13.785223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.790536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.790558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.790566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.796062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.796085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.796093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.801484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 
00:26:11.655 [2024-11-19 17:44:13.801506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.801514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.807030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.807051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.807059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.812465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.812486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.812495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.818050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.818072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.818080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.823457] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.823480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.823488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.828877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.828899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.828907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.834300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.834321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.834329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.839839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.839861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.839870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.845677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.845699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.845707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.851098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.851120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.851128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.857055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.857078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.857086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.862493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.862515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.862523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.655 [2024-11-19 17:44:13.867971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.655 [2024-11-19 17:44:13.867993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.655 [2024-11-19 17:44:13.868005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.873320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.873343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.873351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.878750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.878771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.878779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.884963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.884985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.884993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.890632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.890654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.890663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.896150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.896173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.896182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.901687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.901708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.901716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.907312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.907334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.916 [2024-11-19 17:44:13.907342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.912813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.912835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.912843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.918219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.918245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.918253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.916 [2024-11-19 17:44:13.923706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.916 [2024-11-19 17:44:13.923728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.916 [2024-11-19 17:44:13.923735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.929153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.929175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.929184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.934616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.934639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.934647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.940034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.940055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.940063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.945540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.945562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.945570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.951024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.951047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.951054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.956540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.956561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.956570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.961976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.962001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.962008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.967596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.967617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.967625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.973106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 
00:26:11.917 [2024-11-19 17:44:13.973129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.973137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.978511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.978534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.978543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.983903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.983925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.983932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.989193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.989215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.989222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:13.994596] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:13.994617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:13.994626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.000056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.000078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.000086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.005507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.005529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.005537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.010814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.010836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.010848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.016150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.016172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.016180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.021461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.021483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.021492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.026734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.026756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.026764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.032009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.032032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.032040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.037267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.037289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.037298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.042465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.042487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.042496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.047793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.917 [2024-11-19 17:44:14.047814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.917 [2024-11-19 17:44:14.047822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.917 [2024-11-19 17:44:14.053191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.053213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.053221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.058666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.058692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.058701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.064145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.064167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.064175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.069588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.069610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.069619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.075005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.075026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.918 [2024-11-19 17:44:14.075034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.080256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.080278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.080286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.085439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.085461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.085469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.090774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.090796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.090804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.096003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.096026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.096033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.101481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.101502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.101510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.106969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.106990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.106998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.112629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.112650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.112658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.118051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.118073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.118081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.123751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.123774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.123782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.918 [2024-11-19 17:44:14.129139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:11.918 [2024-11-19 17:44:14.129164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.918 [2024-11-19 17:44:14.129172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.134627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.134654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.134662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.140007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 
00:26:12.179 [2024-11-19 17:44:14.140046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.140055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.145430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.145452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.145460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.150750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.150773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.150785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.156216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.156238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.156246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.161625] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.161647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.161655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.167034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.167056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.167064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.172331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.172354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.172362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.177701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.177723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.177732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.183063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.183085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.183093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.188441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.188464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.188473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.193922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.193944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.193959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.199333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.199356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.199364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.204859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.204882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.204891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.210480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.210503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.210511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.216010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.179 [2024-11-19 17:44:14.216033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.179 [2024-11-19 17:44:14.216042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.179 [2024-11-19 17:44:14.221324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.221346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.221354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.226765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.226787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.226795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.232220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.232243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.232252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.237632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.237654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.237662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.242943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.242971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:12.180 [2024-11-19 17:44:14.243002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.249400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.249423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.249432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.257493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.257516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.257525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.264923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.264952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.264962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.272267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.272290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.272298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.279954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.279977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.279986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.287535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.287558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.287567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.295650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.295672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.295681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.303474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.303498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.303506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.311690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.311717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.311726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.319622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.319646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.319655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.328031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.328054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.328063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.335609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 
00:26:12.180 [2024-11-19 17:44:14.335632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.335641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.180 5546.00 IOPS, 693.25 MiB/s [2024-11-19T16:44:14.403Z] [2024-11-19 17:44:14.344021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.344045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.344054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.352080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.352102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.352111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.359576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.359599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.359608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 
17:44:14.366269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.366292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.366302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.373730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.373754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.373764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.380426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.380450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.380459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.387155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.387178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.387187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.180 [2024-11-19 17:44:14.395209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.180 [2024-11-19 17:44:14.395233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.180 [2024-11-19 17:44:14.395242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.442 [2024-11-19 17:44:14.402455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.442 [2024-11-19 17:44:14.402479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.442 [2024-11-19 17:44:14.402488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.442 [2024-11-19 17:44:14.409088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.442 [2024-11-19 17:44:14.409113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.442 [2024-11-19 17:44:14.409122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.442 [2024-11-19 17:44:14.414607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.442 [2024-11-19 17:44:14.414630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.442 [2024-11-19 17:44:14.414638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.419941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.419968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.419976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.425748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.425770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.425779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.432819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.432842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.432855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.440117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.440141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.440150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.445680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.445703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.445711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.449280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.449303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.449311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.456300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.456323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.456331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.462618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.462641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.462650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.469191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.469214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.469223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.475094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.475117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.475126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.480432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.480454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.480463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.485885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.485911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.485920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.491359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.491381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.491390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.496814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.496836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.496845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.502140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.502162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.502170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.507527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.507550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.507558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.513011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.513033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.513041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.518560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.518582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.518590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.524111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.524133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.524142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.529669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.529690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.529699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.535129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.535151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.442 [2024-11-19 17:44:14.535159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.442 [2024-11-19 17:44:14.540461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.442 [2024-11-19 17:44:14.540483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.540490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.545805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.545827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.545835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.551118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.551139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.551147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.556499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.556520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.556527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.561871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.561893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.561901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.567103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.567125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.567134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.572794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.572816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.572824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.578314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.578336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.578348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.583778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.583800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.583809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.589157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.589179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.589187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.594574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.594596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.594604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.600070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.600092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.600100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.605577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.605599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.605607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.611036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.611058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.611066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.616387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.616410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.616418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.621718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.621740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.621748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.627160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.627186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.627195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.632643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.632666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.632674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.638004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.638026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.638034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.643410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.643432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.643440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.648797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.648819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.648827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.654194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.654225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.654233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.443 [2024-11-19 17:44:14.659600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.443 [2024-11-19 17:44:14.659622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.443 [2024-11-19 17:44:14.659630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.704 [2024-11-19 17:44:14.665013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.704 [2024-11-19 17:44:14.665036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.704 [2024-11-19 17:44:14.665045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.704 [2024-11-19 17:44:14.670504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.704 [2024-11-19 17:44:14.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.704 [2024-11-19 17:44:14.670535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.704 [2024-11-19 17:44:14.676094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.704 [2024-11-19 17:44:14.676117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.704 [2024-11-19 17:44:14.676138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.704 [2024-11-19 17:44:14.681349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.704 [2024-11-19 17:44:14.681371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.704 [2024-11-19 17:44:14.681379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.704 [2024-11-19 17:44:14.686560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.704 [2024-11-19 17:44:14.686581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.704 [2024-11-19 17:44:14.686590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.704 [2024-11-19 17:44:14.691923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.704 [2024-11-19 17:44:14.691946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.704 [2024-11-19 17:44:14.691960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.696943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.696972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.696980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.702198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.702220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.702229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.707462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.707485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.707494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.712672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.712694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.712703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.718169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.718192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.718204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.723417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.723440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.723449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.728646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.728670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.728677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.733878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.733901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.733910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.739074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.739096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.739104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.744420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.744443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.744451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.749979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.750001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.750009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.755523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.755545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.755554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.760844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.760865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.760874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.766111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.766133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.766141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.771325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.771347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.771355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.776579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.776601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.776609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.781804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.781825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.781833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.787071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.787093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.787101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.792261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.792283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.792290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.797453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.797475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.797483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.802688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.802709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.802717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.807833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.807854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.807866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.813013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.813035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.813043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.818262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.818283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.818291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.823460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.823482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.823490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.828662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.828683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.705 [2024-11-19 17:44:14.828691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.705 [2024-11-19 17:44:14.833846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.705 [2024-11-19 17:44:14.833868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.706 [2024-11-19 17:44:14.833876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.706 [2024-11-19 17:44:14.838997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.706 [2024-11-19 17:44:14.839019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.706 [2024-11-19 17:44:14.839027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.706 [2024-11-19 17:44:14.844215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.706 [2024-11-19 17:44:14.844238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.706 [2024-11-19 17:44:14.844246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:12.706 [2024-11-19 17:44:14.849446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.706 [2024-11-19 17:44:14.849467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.706 [2024-11-19 17:44:14.849475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.706 [2024-11-19 17:44:14.854654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.706 [2024-11-19 17:44:14.854679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.706 [2024-11-19 17:44:14.854688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.706 [2024-11-19 17:44:14.859854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.706 [2024-11-19 17:44:14.859876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.859884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.865138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.865160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.865168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.870596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.870618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.870627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.875956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.875978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.875987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.881290] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.881312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.881320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.886513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.886534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.886542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.892109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.892131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.892140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.898154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.898177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.898185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.903404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.903427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.903435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.908620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.908642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.908651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.913856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.913877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.913886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.706 [2024-11-19 17:44:14.919053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.706 [2024-11-19 17:44:14.919075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.706 [2024-11-19 17:44:14.919084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.924280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.924303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.924312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.929622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.929644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.929654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.934911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.934933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.934941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.940121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.940143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.940152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.945281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.945303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.945315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.950517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.950539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.950547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.955730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.955753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.955762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.960973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.960995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:12.967 [2024-11-19 17:44:14.961002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.966223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.966245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.966253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.971430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.971452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.971460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.976570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.976592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.976600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.981767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.981789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.981797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.987012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.987033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.967 [2024-11-19 17:44:14.987041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.967 [2024-11-19 17:44:14.992209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.967 [2024-11-19 17:44:14.992237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:14.992245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:14.997386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:14.997408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:14.997417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.003138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.003161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.003169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.009791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.009814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.009822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.017072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.017095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.017105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.023622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.023645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.023654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.030964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 
00:26:12.968 [2024-11-19 17:44:15.030987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.030996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.038665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.038689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.038697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.045970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.045993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.046002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.053598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.053621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.053630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.061030] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.061054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.061062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.068885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.068910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.068919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.076736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.076759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.076769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.084021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.084044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.084053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.091338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.091362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.091371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.098599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.098623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.098631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.106199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.106223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.106232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.114137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.114160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.114172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.121423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.121446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.121455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.128925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.128956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.128965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.135813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.135836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.135845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.142885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.142907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.142916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.150368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.150391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.150400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.156346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.156369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.156377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.161594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.161616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.968 [2024-11-19 17:44:15.161624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:12.968 [2024-11-19 17:44:15.166759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580) 00:26:12.968 [2024-11-19 17:44:15.166781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:26:12.968 [2024-11-19 17:44:15.166789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:12.968 [2024-11-19 17:44:15.171965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.968 [2024-11-19 17:44:15.171987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.968 [2024-11-19 17:44:15.171996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:12.968 [2024-11-19 17:44:15.177215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.969 [2024-11-19 17:44:15.177237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.969 [2024-11-19 17:44:15.177246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:12.969 [2024-11-19 17:44:15.182495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:12.969 [2024-11-19 17:44:15.182516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.969 [2024-11-19 17:44:15.182525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.187820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.187842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.187851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.193170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.193192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.193201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.198417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.198439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.198448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.203676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.203698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.203706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.208894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.208916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.208924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.214117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.214139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.214151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.220247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.220270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.220279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.226818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.226841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.226849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.233509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.233531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.233540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.241041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.241066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.241074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.248975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.249000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.249009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.256975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.256998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.257008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.265311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.265336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.265344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.271796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.271820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.271830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.277173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.277200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.277209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.282656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.282680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.282688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.287926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.287956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.287964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.293239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.293263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.293272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.298492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.298515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.298523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.303401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.303425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.303434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.308503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.308527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.308536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.313754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.229 [2024-11-19 17:44:15.313777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.229 [2024-11-19 17:44:15.313786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:13.229 [2024-11-19 17:44:15.319007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.230 [2024-11-19 17:44:15.319030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.230 [2024-11-19 17:44:15.319039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:13.230 [2024-11-19 17:44:15.324296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.230 [2024-11-19 17:44:15.324319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.230 [2024-11-19 17:44:15.324328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:13.230 [2024-11-19 17:44:15.329528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.230 [2024-11-19 17:44:15.329550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.230 [2024-11-19 17:44:15.329559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:13.230 [2024-11-19 17:44:15.334793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.230 [2024-11-19 17:44:15.334817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.230 [2024-11-19 17:44:15.334826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:13.230 [2024-11-19 17:44:15.340041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee3580)
00:26:13.230 [2024-11-19 17:44:15.340064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.230 [2024-11-19 17:44:15.340072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:13.230 5448.00 IOPS, 681.00 MiB/s
00:26:13.230 Latency(us)
00:26:13.230 [2024-11-19T16:44:15.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:13.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:13.230 nvme0n1 : 2.00 5451.68 681.46 0.00 0.00 2932.25 569.88 14246.96
00:26:13.230 [2024-11-19T16:44:15.453Z] ===================================================================================================================
00:26:13.230 [2024-11-19T16:44:15.453Z] Total : 5451.68 681.46 0.00 0.00 2932.25 569.88 14246.96
00:26:13.230 {
00:26:13.230 "results": [
00:26:13.230 {
00:26:13.230 "job": "nvme0n1",
00:26:13.230 "core_mask": "0x2",
00:26:13.230 "workload": "randread",
00:26:13.230 "status": "finished",
00:26:13.230 "queue_depth": 16,
00:26:13.230 "io_size": 131072,
00:26:13.230 "runtime": 2.001584,
00:26:13.230 "iops": 5451.682267644026,
00:26:13.230 "mibps": 681.4602834555033,
00:26:13.230 "io_failed": 0,
00:26:13.230 "io_timeout": 0,
00:26:13.230 "avg_latency_us": 2932.246293510136,
00:26:13.230 "min_latency_us": 569.8782608695652,
00:26:13.230 "max_latency_us": 14246.95652173913
00:26:13.230 }
00:26:13.230 ],
00:26:13.230 "core_count": 1
00:26:13.230 }
00:26:13.230 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:13.230 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:13.230 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:13.230 | .driver_specific
00:26:13.230 | .nvme_error
00:26:13.230 | .status_code
00:26:13.230 | .command_transient_transport_error'
00:26:13.230 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 352 > 0 ))
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3603396
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3603396 ']'
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3603396
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603396
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603396'
00:26:13.490 killing process with pid 3603396
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3603396
00:26:13.490 Received shutdown signal, test time was about 2.000000 seconds
00:26:13.490
00:26:13.490 Latency(us)
00:26:13.490 [2024-11-19T16:44:15.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:13.490 [2024-11-19T16:44:15.713Z] ===================================================================================================================
00:26:13.490 [2024-11-19T16:44:15.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:13.490 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3603396
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3604043
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3604043 /var/tmp/bperf.sock
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3604043 ']'
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:13.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:13.749 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:13.749 [2024-11-19 17:44:15.793883] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
00:26:13.749 [2024-11-19 17:44:15.793931] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3604043 ]
00:26:13.749 [2024-11-19 17:44:15.851428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.749 [2024-11-19 17:44:15.896128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:14.008 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:14.009 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:14.009 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.009 17:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.009 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:14.009 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.009 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.009 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.009 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.009 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.578 nvme0n1
00:26:14.578 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:14.578 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.578 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.578 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.578 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:14.578 17:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:14.578 Running I/O for 2 seconds...
00:26:14.578 [2024-11-19 17:44:16.773163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.578 [2024-11-19 17:44:16.773324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.578 [2024-11-19 17:44:16.773352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.578 [2024-11-19 17:44:16.783104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.578 [2024-11-19 17:44:16.783247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.578 [2024-11-19 17:44:16.783269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.578 [2024-11-19 17:44:16.792879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.578 [2024-11-19 17:44:16.793054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.578 [2024-11-19 17:44:16.793073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.802822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.802964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.802983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.812543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.812690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.812710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.822305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.822454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.822473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.832014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.832158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.832176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.841758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.841898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.841917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.851475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.851617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.851635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.861337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.861484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.861502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.871042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.871191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.871209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.880866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.881020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.881039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.890647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.890793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.890814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.900545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.900689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.900709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.838 [2024-11-19 17:44:16.910239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.838 [2024-11-19 17:44:16.910379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-11-19 17:44:16.910398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.919935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.920095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.920113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.929756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.929899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.929917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.939471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.939612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.939631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.949207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.949353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.949371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.958898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.959050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.959068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.968565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.968707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.968725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.978257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.978402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.978420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.988037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.988175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.988193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:16.997698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:16.997836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:16.997854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:17.007391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:17.007542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:17.007560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:17.017059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:17.017202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:17.017220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:17.026811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:14.839 [2024-11-19 17:44:17.026962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.839 [2024-11-19 17:44:17.026980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:14.839 [2024-11-19 17:44:17.036747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
[2024-11-19 17:44:17.036889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.839 [2024-11-19 17:44:17.036909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:14.839 [2024-11-19 17:44:17.046407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:14.839 [2024-11-19 17:44:17.046548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.839 [2024-11-19 17:44:17.046566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:14.839 [2024-11-19 17:44:17.056157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:14.839 [2024-11-19 17:44:17.056308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.839 [2024-11-19 17:44:17.056326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.065995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.066139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.066157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.075664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.075805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.075824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.085420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.085567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.085585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.095098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.095246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.095263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.104816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.104959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.104979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.114506] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.114646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.114665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.124233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.124374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.124393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.133952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.134113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.134132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.143728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.143868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.143892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:26:15.098 [2024-11-19 17:44:17.153376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.153516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.153534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.163061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.163201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.163219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.172733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.172870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.172888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.182442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.182580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.182598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.192180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.192317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.192335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.201822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.201963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.201981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.211497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.211635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.211653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.221164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.221302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.221321] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.230905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.231058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.231076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.240666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.240804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.240823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.250346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.250485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.250503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.259995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.260136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.260154] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.269662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.269800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.269819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.279369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.279509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.279527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.289243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.289384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.098 [2024-11-19 17:44:17.289403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.298917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.299063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18663 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:15.098 [2024-11-19 17:44:17.299081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.098 [2024-11-19 17:44:17.308560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.098 [2024-11-19 17:44:17.308699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.099 [2024-11-19 17:44:17.308717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.357 [2024-11-19 17:44:17.318342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.357 [2024-11-19 17:44:17.318484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-19 17:44:17.318502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.357 [2024-11-19 17:44:17.328177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.357 [2024-11-19 17:44:17.328329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-19 17:44:17.328346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.357 [2024-11-19 17:44:17.337884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.357 [2024-11-19 17:44:17.338027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:5492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-19 17:44:17.338046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.357 [2024-11-19 17:44:17.347597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.357 [2024-11-19 17:44:17.347736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-19 17:44:17.347754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.357 [2024-11-19 17:44:17.357267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.357 [2024-11-19 17:44:17.357405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-19 17:44:17.357423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.357 [2024-11-19 17:44:17.367055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.357 [2024-11-19 17:44:17.367198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-19 17:44:17.367216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.357 [2024-11-19 17:44:17.376726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.357 [2024-11-19 17:44:17.376864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.376883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.386433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.386570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.386588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.396096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.396237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.396255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.405765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.405903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.405921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.415416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 
[2024-11-19 17:44:17.415554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.415572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.425129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.425267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.425286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.434814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.434959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.434977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.444475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.444612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.444630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.454152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.454291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.454308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.463809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.463954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.463973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.473492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.473630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.473647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.483208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.483347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.483369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.492888] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.493036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.493054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.502579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.502717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.502735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.512217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.512357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.512374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.521999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.522141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.522159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:26:15.358 [2024-11-19 17:44:17.531708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.531846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.531865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.541561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.541700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.541718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.551252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.551393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.551411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.560910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.561059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.561077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.358 [2024-11-19 17:44:17.570567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.358 [2024-11-19 17:44:17.570710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-19 17:44:17.570728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.618 [2024-11-19 17:44:17.580373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.618 [2024-11-19 17:44:17.580512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.618 [2024-11-19 17:44:17.580531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.618 [2024-11-19 17:44:17.590092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.618 [2024-11-19 17:44:17.590230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.618 [2024-11-19 17:44:17.590248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.618 [2024-11-19 17:44:17.599746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:15.618 [2024-11-19 17:44:17.599884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.618 [2024-11-19 17:44:17.599903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:15.618 [2024-11-19 17:44:17.609419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:15.618 [2024-11-19 17:44:17.609558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.618 [2024-11-19 17:44:17.609575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
[... same three-line cycle (data_crc32_calc_done Data digest error -> WRITE sqid:1, cid cycling 96-99, varying lba -> COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats roughly every 10 ms on tqpair=(0x12ce640) from 17:44:17.619 through 17:44:17.764 ...]
26254.00 IOPS, 102.55 MiB/s [2024-11-19T16:44:17.842Z]
[... cycle continues unchanged from 17:44:17.773 through 17:44:18.347 ...]
00:26:16.141 [2024-11-19 17:44:18.356824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0
00:26:16.141 [2024-11-19 17:44:18.356992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.141 [2024-11-19 17:44:18.357010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.400 [2024-11-19 17:44:18.366716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.400 [2024-11-19 17:44:18.366858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.400 [2024-11-19 17:44:18.366876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.400 [2024-11-19 17:44:18.376397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.400 [2024-11-19 17:44:18.376538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.400 [2024-11-19 17:44:18.376556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.400 [2024-11-19 17:44:18.386157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.400 [2024-11-19 17:44:18.386297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.400 [2024-11-19 17:44:18.386315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.400 [2024-11-19 17:44:18.395829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.400 [2024-11-19 17:44:18.395974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.400 [2024-11-19 17:44:18.395992] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.400 [2024-11-19 17:44:18.405525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.400 [2024-11-19 17:44:18.405663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.400 [2024-11-19 17:44:18.405681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.400 [2024-11-19 17:44:18.415228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.400 [2024-11-19 17:44:18.415367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.415385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.424869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.425018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.425037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.434549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.434688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.434706] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.444239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.444379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.444398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.453888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.454038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.454056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.463584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.463722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.463741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.473260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.473401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:16.401 [2024-11-19 17:44:18.473419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.482962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.483105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.483123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.492676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.492814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.492834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.502336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.502477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.502496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.512029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.512169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20144 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.512188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.521757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.521898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.521916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.531401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.531541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.531559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.541111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.541249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.541267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.551002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.551145] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.551163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.560683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.560824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.560842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.570351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.570491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.570508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.580033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.580176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.580194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.589755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.589896] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.589914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.599453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.599591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.599609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.609101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.609242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.609260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.401 [2024-11-19 17:44:18.618780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.401 [2024-11-19 17:44:18.618918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.401 [2024-11-19 17:44:18.618936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.628559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 
00:26:16.661 [2024-11-19 17:44:18.628698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.628716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.638219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.638358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.638377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.647893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.648040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.648058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.657554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.657694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.657711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.667236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.667375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.667393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.676920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.677067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.677085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.686620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.686758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.686776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.696267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.696408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.696426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.705922] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.706069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.706087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.715610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.715750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.715768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.725268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.725408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.725426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.735061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.735202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.735220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:26:16.661 [2024-11-19 17:44:18.744771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.744910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.744931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.754445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.754584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.754602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 [2024-11-19 17:44:18.764101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000166fdeb0 00:26:16.661 [2024-11-19 17:44:18.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.661 [2024-11-19 17:44:18.764260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.661 26295.50 IOPS, 102.72 MiB/s 00:26:16.661 Latency(us) 00:26:16.661 [2024-11-19T16:44:18.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.661 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:16.661 nvme0n1 : 2.01 26296.30 102.72 0.00 0.00 4859.38 2521.71 10257.81 00:26:16.661 [2024-11-19T16:44:18.884Z] 
=================================================================================================================== 00:26:16.661 [2024-11-19T16:44:18.884Z] Total : 26296.30 102.72 0.00 0.00 4859.38 2521.71 10257.81 00:26:16.661 { 00:26:16.661 "results": [ 00:26:16.661 { 00:26:16.661 "job": "nvme0n1", 00:26:16.661 "core_mask": "0x2", 00:26:16.661 "workload": "randwrite", 00:26:16.661 "status": "finished", 00:26:16.661 "queue_depth": 128, 00:26:16.661 "io_size": 4096, 00:26:16.661 "runtime": 2.006024, 00:26:16.661 "iops": 26296.295557779966, 00:26:16.661 "mibps": 102.71990452257799, 00:26:16.661 "io_failed": 0, 00:26:16.661 "io_timeout": 0, 00:26:16.661 "avg_latency_us": 4859.37770140768, 00:26:16.661 "min_latency_us": 2521.711304347826, 00:26:16.661 "max_latency_us": 10257.808695652175 00:26:16.661 } 00:26:16.661 ], 00:26:16.661 "core_count": 1 00:26:16.661 } 00:26:16.661 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:16.661 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:16.661 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:16.661 | .driver_specific 00:26:16.661 | .nvme_error 00:26:16.661 | .status_code 00:26:16.662 | .command_transient_transport_error' 00:26:16.662 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:16.921 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:26:16.921 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3604043 00:26:16.921 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3604043 ']' 00:26:16.921 17:44:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3604043 00:26:16.921 17:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:16.921 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.921 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3604043 00:26:16.921 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:16.921 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:16.921 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3604043' 00:26:16.921 killing process with pid 3604043 00:26:16.921 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3604043 00:26:16.921 Received shutdown signal, test time was about 2.000000 seconds 00:26:16.921 00:26:16.921 Latency(us) 00:26:16.921 [2024-11-19T16:44:19.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.921 [2024-11-19T16:44:19.144Z] =================================================================================================================== 00:26:16.921 [2024-11-19T16:44:19.144Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.921 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3604043 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
# rw=randwrite 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3604524 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3604524 /var/tmp/bperf.sock 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3604524 ']' 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.179 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.179 [2024-11-19 17:44:19.238352] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:26:17.179 [2024-11-19 17:44:19.238399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3604524 ]
00:26:17.179 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:17.179 Zero copy mechanism will not be used.
00:26:17.179 [2024-11-19 17:44:19.295781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:17.179 [2024-11-19 17:44:19.340151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.437 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.695 nvme0n1
00:26:17.955 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:17.955 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.955 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.955 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.955 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:17.955 17:44:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:17.955 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:17.955 Zero copy mechanism will not be used.
00:26:17.955 Running I/O for 2 seconds...
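[Editor's note] The `data_crc32_calc_done: Data digest error` records that follow come from the target recomputing the NVMe/TCP data digest (DDGST), a CRC32C over the PDU payload; the `accel_error_inject_error -o crc32c -t corrupt -i 32` call above deliberately corrupts the host-side digest, so each WRITE completes with a transient transport error, which is the behavior this test asserts. A minimal, SPDK-independent sketch of the CRC32C check (the `crc32c` helper name is ours, not an SPDK API):

```python
# Bit-at-a-time CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
# This is the digest algorithm NVMe/TCP uses for HDGST/DDGST.
def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF

payload = b"123456789"
# Standard check value for the Castagnoli polynomial:
assert crc32c(payload) == 0xE3069283
# Corrupting a single payload bit changes the digest, so the receiver's
# recomputed CRC no longer matches the transmitted one -> digest error.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != crc32c(payload)
```

In the test, the corruption is injected into the digest computation itself rather than the payload, but the mismatch the receiver sees is the same.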
00:26:17.955 [2024-11-19 17:44:20.030431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.030509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.030538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.036839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.036925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.036957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.042113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.042209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.042233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.047797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.047878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.047898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.053128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.053224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.053244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.058572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.058710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.058730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.063993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.064069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.064089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.069513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.069603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.069623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.075718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.075809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.075829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.081447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.081578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.081598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.086759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.086832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.086851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.092369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.092543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.955 [2024-11-19 17:44:20.092562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.099336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.099496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.099518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.105521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.105600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.105620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.110919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.110982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.111001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.116130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.116232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.116251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.122229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.122393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.122413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.128441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.128545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.128564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.134765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.134910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.134930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.141094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.141242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.141262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.147280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.147426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.147446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.955 [2024-11-19 17:44:20.154088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.955 [2024-11-19 17:44:20.154248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.955 [2024-11-19 17:44:20.154267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.956 [2024-11-19 17:44:20.160172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.956 [2024-11-19 17:44:20.160268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.956 [2024-11-19 17:44:20.160292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.956 [2024-11-19 17:44:20.165484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:17.956 [2024-11-19 17:44:20.165545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.956 [2024-11-19 17:44:20.165564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.956 [2024-11-19 17:44:20.170606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:17.956 [2024-11-19 17:44:20.170748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.956 [2024-11-19 17:44:20.170768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.216 [2024-11-19 17:44:20.176121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.216 [2024-11-19 17:44:20.176206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.216 [2024-11-19 17:44:20.176225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.216 [2024-11-19 17:44:20.181378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.216 [2024-11-19 17:44:20.181442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.216 [2024-11-19 17:44:20.181463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.186135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.186222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.186241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.190986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.191071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.191091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.196230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.196307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.196327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.201033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.201090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.201108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.205662] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.205745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.205765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.210522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.210580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.210599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.215981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.216061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.216080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.221587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.221708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.221727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:18.217 [2024-11-19 17:44:20.226728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.226781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.226800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.231724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.231808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.231827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.236650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.236784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.236802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.242450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.242521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.242540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.247517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.247579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.247598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.252988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.253054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.253073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.258701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.258772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.258792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.263746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.263831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.263850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.268786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.268857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.268877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.274075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.274172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.274191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.279847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.279905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.279924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.285435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.285495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.217 [2024-11-19 17:44:20.285514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.290919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.290990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.291010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.296283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.296508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.296537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.301559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.301820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.301841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.307089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.307355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.307376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.312138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.312407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.312427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.317448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.317708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.217 [2024-11-19 17:44:20.317729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.217 [2024-11-19 17:44:20.322655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.217 [2024-11-19 17:44:20.322971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.322991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.328297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.328555] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.328576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.333047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.333304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.333324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.337508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.337772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.337793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.342025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.342291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.342311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.346646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.346918] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.346939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.351258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.351524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.351544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.355903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.356126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.356147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.360400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.360651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.360671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.365058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with 
pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.365309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.365329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.369693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.369952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.369972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.374157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.374430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.374450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.378654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.378901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.378921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.384561] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.384870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.384891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.391250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.391511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.391532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.397473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.397825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.397846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.404819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.405074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.405095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 
17:44:20.410054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.410293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.410314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.414726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.414983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.415005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.419223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.419464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.419485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.423795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.424051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.424071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.428402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.428655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.428679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.218 [2024-11-19 17:44:20.432814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.218 [2024-11-19 17:44:20.433077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.218 [2024-11-19 17:44:20.433098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.437471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.437732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.437753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.442714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.442990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.443011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.448007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.448258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.448279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.453098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.453340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.453360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.458513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.458764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.458785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.464667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.464966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.464987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.470074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.470312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.470333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.474538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.474789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.474810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.478931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.479189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.479209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.483195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.483447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.479 [2024-11-19 17:44:20.483466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.487507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.479 [2024-11-19 17:44:20.487773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.479 [2024-11-19 17:44:20.487792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.479 [2024-11-19 17:44:20.491806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.492067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.492087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.496019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.496286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.496306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.500834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.501331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.501352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.507030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.507316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.507336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.512598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.512846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.512867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.518013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.518264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.518284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.524077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.524325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.524346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.529230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.529479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.529499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.534460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.534705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.534725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.539335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.539578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.539596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.544075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:18.480 [2024-11-19 17:44:20.544331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.544351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.548671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.548927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.548953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.553084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.553328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.553348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.557477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.557724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.557749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.561809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.562061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.562082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.566218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.566466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.566486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.570589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.570837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.570857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.574892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.575152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.575172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.579238] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.579489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.579509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.583547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.583801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.480 [2024-11-19 17:44:20.583821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.480 [2024-11-19 17:44:20.588171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.480 [2024-11-19 17:44:20.588413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.588434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.592733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.592984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.593004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:18.481 [2024-11-19 17:44:20.597125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.597379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.597399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.601457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.601708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.601728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.605771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.606021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.606041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.610104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.610351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.610372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.614377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.614630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.614650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.618651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.618900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.618920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.622908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.623166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.623187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.627520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.627774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.627793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.631911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.632193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.632213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.636444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.636692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.636712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.641261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.641498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.641518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.646581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.646829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.481 [2024-11-19 17:44:20.646849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.651956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.652207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.652227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.657252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.657489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.657509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.662065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.662298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.662318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.666486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.666737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.666757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.670933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.671180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.671200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.675374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.675626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.675648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.679997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.680232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.680252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.684516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.684759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.684779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.689050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.689293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.481 [2024-11-19 17:44:20.689314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.481 [2024-11-19 17:44:20.693421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.481 [2024-11-19 17:44:20.693670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.482 [2024-11-19 17:44:20.693690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.697937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.698274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.698294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.703778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:18.742 [2024-11-19 17:44:20.704047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.704068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.709087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.709191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.709210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.713898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.714150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.714170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.719362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.719620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.719640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.726776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.727052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.727072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.732794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.733045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.733066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.738452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.738693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.738712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.743107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.743350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.743370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.747710] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.747954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.747975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.752405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.752645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.752665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.757019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.757253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.757272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.761579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.761839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.761859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:18.742 [2024-11-19 17:44:20.765962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.766209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.766230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.770480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.770727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.770747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.775159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.775410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.775430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.780129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.780376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.780396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.785037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.785273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.785293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.790318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.790548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.790567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.795417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.795692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.795712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.800358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.800619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.800641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.742 [2024-11-19 17:44:20.805085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.742 [2024-11-19 17:44:20.805333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.742 [2024-11-19 17:44:20.805357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.810234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.810464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.810484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.815218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.815462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.815483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.819927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.820179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.743 [2024-11-19 17:44:20.820199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.824556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.824800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.824820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.829995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.830238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.830258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.834967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.835216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.835236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.840236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.840482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.840502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.844744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.844999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.845020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.849703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.849954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.849974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.854673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.854911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.854932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.859329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.859579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.859600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.863838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.864081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.864101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.870003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.870241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.870261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.875333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.875572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.875592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.880330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:18.743 [2024-11-19 17:44:20.880579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.880600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.884858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.885108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.885129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.889259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.889505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.889525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.893805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.894058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.894079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.898339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.898576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.898596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.903540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.903900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.903921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.909561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.909831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.909852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.915624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.915918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.915938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.921983] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.922308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.922329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.927982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.743 [2024-11-19 17:44:20.928274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.743 [2024-11-19 17:44:20.928294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.743 [2024-11-19 17:44:20.933891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.744 [2024-11-19 17:44:20.934241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.744 [2024-11-19 17:44:20.934261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.744 [2024-11-19 17:44:20.940106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.744 [2024-11-19 17:44:20.940445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.744 [2024-11-19 17:44:20.940469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:18.744 [2024-11-19 17:44:20.946577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.744 [2024-11-19 17:44:20.946957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.744 [2024-11-19 17:44:20.946978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.744 [2024-11-19 17:44:20.952583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.744 [2024-11-19 17:44:20.952902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.744 [2024-11-19 17:44:20.952923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.744 [2024-11-19 17:44:20.958832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:18.744 [2024-11-19 17:44:20.959155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.744 [2024-11-19 17:44:20.959177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:20.965080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:20.965391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:20.965412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:20.971753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:20.972077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:20.972097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:20.978884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:20.979162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:20.979183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:20.984870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:20.985158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:20.985179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:20.990791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:20.991122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:20.991143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:20.996386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:20.996625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:20.996646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.001037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.001277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.001299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.005438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.005679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.005700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.009823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.010074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.005 [2024-11-19 17:44:21.010095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.014246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.014493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.014512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.018603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.018854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.018875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.022957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.023202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.023221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.005 6008.00 IOPS, 751.00 MiB/s [2024-11-19T16:44:21.228Z] [2024-11-19 17:44:21.028143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.028331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.028350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.032214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.032380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.032399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.036347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.036519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.036539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.040894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.041073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.041092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.045915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:19.005 [2024-11-19 17:44:21.046116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.046137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.050449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.050609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.050630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.054746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.054918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.054937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.058956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.059138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.059157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.063258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.063430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.063451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.005 [2024-11-19 17:44:21.067536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.005 [2024-11-19 17:44:21.067697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.005 [2024-11-19 17:44:21.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.071677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.071845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.071868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.075880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.076047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.076066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.080309] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.080482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.080502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.084548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.084713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.084733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.088837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.089019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.089038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.093056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.093225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.093246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:19.006 [2024-11-19 17:44:21.097202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.097366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.097385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.101407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.101569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.101589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.105609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.105786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.105804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.109816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.109995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.110015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.114080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.114255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.114275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.118276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.118441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.118460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.122211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.122398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.122417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.126345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.126535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.126555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.131505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.131672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.131691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.136339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.136502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.136520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.140581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.140751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.140769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.144730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.144895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.006 [2024-11-19 17:44:21.144913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.148932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.149101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.149121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.152973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.153115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.153134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.157184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.157310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.157328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.006 [2024-11-19 17:44:21.161996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.006 [2024-11-19 17:44:21.162173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.006 [2024-11-19 17:44:21.162193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.166863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.166999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.167020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.170997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.171134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.171155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.175076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.175225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.175246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.179126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.179272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.179291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.183299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.183447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.183470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.187334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.187482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.187500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.191629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.191761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.191780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.196160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:19.007 [2024-11-19 17:44:21.196287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.196306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.200797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.200939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.200965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.205872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.206021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.206040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.210454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.210580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.210599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.215455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.215598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.215617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.007 [2024-11-19 17:44:21.219530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.007 [2024-11-19 17:44:21.219661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.007 [2024-11-19 17:44:21.219679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.223631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.223784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.223805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.227699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.227861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.227879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.232724] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.232988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.233010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.238251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.238468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.238490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.244370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.244495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.244515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.251053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.251224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.251244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:19.268 [2024-11-19 17:44:21.257855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.257966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.257987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.263195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.263301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.263320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.267993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.268088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.268108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.272519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.272603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.272622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.276609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.276697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.276715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.280627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.280721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.280740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.284709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.284799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.284818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.288811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.288907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.288926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.293069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.293143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.293162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.297358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.297456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.297476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.302359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.302436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.268 [2024-11-19 17:44:21.302455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.268 [2024-11-19 17:44:21.307066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.268 [2024-11-19 17:44:21.307171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.268 [2024-11-19 17:44:21.307194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.268 [2024-11-19 17:44:21.311167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.268 [2024-11-19 17:44:21.311268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.268 [2024-11-19 17:44:21.311288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.268 [2024-11-19 17:44:21.315235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.268 [2024-11-19 17:44:21.315335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.268 [2024-11-19 17:44:21.315355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.268 [2024-11-19 17:44:21.319371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.268 [2024-11-19 17:44:21.319464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.268 [2024-11-19 17:44:21.319483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.323384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.323513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.323531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.327386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.327485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.327503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.331430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.331511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.331530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.335960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.336034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.336053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.340659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.340737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.340756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.344905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.345005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.345025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.348868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.348962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.348983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.352834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.352937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.352964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.356870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.356971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.356991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.361171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.361258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.361277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.365580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.365713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.365733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.370338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.370431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.370450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.374851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.374993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.375011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.379507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.379618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.379637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.384291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.384392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.384411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.388502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.388606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.388625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.392424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.392523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.392541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.396482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.396579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.396597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.400534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.400619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.400638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.404573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.404669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.404688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.408651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.408740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.408759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.269 [2024-11-19 17:44:21.412547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.269 [2024-11-19 17:44:21.412663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.269 [2024-11-19 17:44:21.412681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.416635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.416734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.416756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.421313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.421410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.421429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.425732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.425844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.425862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.430550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.430635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.430654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.435098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.435172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.435191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.439145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.439250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.439268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.443116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.443209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.443231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.447086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.447176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.447194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.451112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.451191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.451210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.455221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.455321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.455339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.459307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.459407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.459425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.463352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.463443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.463462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.467335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.467445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.467464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.471399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.471486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.471504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.475425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.475545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.475564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.479444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.479539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.479558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.270 [2024-11-19 17:44:21.483493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.270 [2024-11-19 17:44:21.483602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.270 [2024-11-19 17:44:21.483620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.487542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.487642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.487661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.491634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.491747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.491766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.495714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.495813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.495832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.499701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.499809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.499828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.503734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.503828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.503846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.507755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.507881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.507899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.511692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.511774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.511793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.515662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.515760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.515779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.519597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.519709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.519727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.524007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.531 [2024-11-19 17:44:21.524093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.531 [2024-11-19 17:44:21.524116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.531 [2024-11-19 17:44:21.528805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.528886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.528905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.533323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.533421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.533439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.537309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.537410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.537428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.541381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.541479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.541497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.545315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.545408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.545427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.549411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.549507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.549526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.553921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.554030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.554049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.558365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.558468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.558486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.562512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.562623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.562642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.566536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.566635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.566654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.570552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.570661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.570679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.574549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.574649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.574668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.578508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.578607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.578626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.582528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.582621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.582640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.586521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.586635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.586653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.590550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.590652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.590671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.595205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.595295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.595313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.599658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.599750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.599769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.604210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.604327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.604345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.608825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.608936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.608960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.613547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.613621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.532 [2024-11-19 17:44:21.613639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.532 [2024-11-19 17:44:21.617777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8
00:26:19.532 [2024-11-19 17:44:21.617872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.532 [2024-11-19 17:44:21.617891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.532 [2024-11-19 17:44:21.621818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.532 [2024-11-19 17:44:21.621897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.532 [2024-11-19 17:44:21.621915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.532 [2024-11-19 17:44:21.625787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.532 [2024-11-19 17:44:21.625883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.625903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.629898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.629995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.630014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.634532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.634648] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.634671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.639914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.640098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.640117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.645637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.645768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.645787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.649809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.649914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.649933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.654160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.654291] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.654310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.658276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.658420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.658438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.662276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.662434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.662454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.666323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.666428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.666446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.671452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with 
pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.671608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.671627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.676527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.676686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.676705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.681668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.681795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.681814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.686895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.687011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.687031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.691937] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.692077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.692096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.697413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.697520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.697538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.701974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.702099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.702118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.705994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.706099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.706118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 
17:44:21.710215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.710300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.710319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.714284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.714395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.714414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.718347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.718446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.718465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.722627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.722723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.722742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:19.533 [2024-11-19 17:44:21.726677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.533 [2024-11-19 17:44:21.726773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.533 [2024-11-19 17:44:21.726792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.534 [2024-11-19 17:44:21.730816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.534 [2024-11-19 17:44:21.730919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.534 [2024-11-19 17:44:21.730937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.534 [2024-11-19 17:44:21.735042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.534 [2024-11-19 17:44:21.735184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.534 [2024-11-19 17:44:21.735204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.534 [2024-11-19 17:44:21.739138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.534 [2024-11-19 17:44:21.739271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.534 [2024-11-19 17:44:21.739289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.534 [2024-11-19 17:44:21.743530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.534 [2024-11-19 17:44:21.743664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.534 [2024-11-19 17:44:21.743683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.534 [2024-11-19 17:44:21.748212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.534 [2024-11-19 17:44:21.748275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.534 [2024-11-19 17:44:21.748293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.794 [2024-11-19 17:44:21.752805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.794 [2024-11-19 17:44:21.752871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.794 [2024-11-19 17:44:21.752894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.794 [2024-11-19 17:44:21.757538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.794 [2024-11-19 17:44:21.757613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.794 [2024-11-19 17:44:21.757632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.794 [2024-11-19 17:44:21.762470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.794 [2024-11-19 17:44:21.762532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.794 [2024-11-19 17:44:21.762550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.794 [2024-11-19 17:44:21.766857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.794 [2024-11-19 17:44:21.766977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.794 [2024-11-19 17:44:21.766996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.794 [2024-11-19 17:44:21.770994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.794 [2024-11-19 17:44:21.771063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.771081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.774960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.775027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.795 [2024-11-19 17:44:21.775046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.779071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.779145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.779164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.783139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.783219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.783237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.787132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.787210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.787229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.791112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.791185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.791205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.795556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.795642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.795661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.800255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.800333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.800353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.804408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.804478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.804496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.808513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.808581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.808600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.812611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.812700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.816692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.816821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.816840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.820776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.820847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.820867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.824845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:19.795 [2024-11-19 17:44:21.824932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.824957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.829072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.829138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.829157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.833175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.833248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.833266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.837203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.837258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.837276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.841659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.841774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.841793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.845764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.845873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.845893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.850563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.850755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.850775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.856041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.856208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.856228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.862368] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.862598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.862619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.868968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.795 [2024-11-19 17:44:21.869134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.795 [2024-11-19 17:44:21.869157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.795 [2024-11-19 17:44:21.875931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.876111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.876131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.883048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.883274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.883294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:19.796 [2024-11-19 17:44:21.889931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.890111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.890130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.896564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.896716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.896736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.903197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.903377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.903396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.910111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.910345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.910365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.916805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.916982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.917002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.923647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.923812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.923830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.930384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.930528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.930546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.936970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.937123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.937142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.943206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.943333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.943352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.948742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.948869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.948887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.952800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.952927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.952946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.956875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.957015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.796 [2024-11-19 17:44:21.957033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.960880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.961018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.961037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.964995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.965123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.965142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.969129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.969267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.969286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.973238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.973365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.973383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.977338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.977471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.977490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.981287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.981419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.981438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.985407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.985532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.985550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.990095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.990229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.990248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.995026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.995152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.995169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.796 [2024-11-19 17:44:21.999150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.796 [2024-11-19 17:44:21.999283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.796 [2024-11-19 17:44:21.999302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.797 [2024-11-19 17:44:22.003252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.797 [2024-11-19 17:44:22.003376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.797 [2024-11-19 17:44:22.003395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.797 [2024-11-19 17:44:22.007259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 
00:26:19.797 [2024-11-19 17:44:22.007387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.797 [2024-11-19 17:44:22.007410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.797 [2024-11-19 17:44:22.011351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:19.797 [2024-11-19 17:44:22.011477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.797 [2024-11-19 17:44:22.011495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:20.056 [2024-11-19 17:44:22.015413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:20.056 [2024-11-19 17:44:22.015541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.056 [2024-11-19 17:44:22.015560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:20.056 [2024-11-19 17:44:22.019487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:20.056 [2024-11-19 17:44:22.019624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.056 [2024-11-19 17:44:22.019643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:20.056 [2024-11-19 17:44:22.023509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:20.056 [2024-11-19 17:44:22.023636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.056 [2024-11-19 17:44:22.023654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:20.056 [2024-11-19 17:44:22.027593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ceb20) with pdu=0x2000166ff3c8 00:26:20.056 [2024-11-19 17:44:22.027719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.056 [2024-11-19 17:44:22.027737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:20.056 6468.50 IOPS, 808.56 MiB/s 00:26:20.056 Latency(us) 00:26:20.056 [2024-11-19T16:44:22.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.056 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:20.056 nvme0n1 : 2.00 6467.31 808.41 0.00 0.00 2469.92 1716.76 9801.91 00:26:20.056 [2024-11-19T16:44:22.279Z] =================================================================================================================== 00:26:20.056 [2024-11-19T16:44:22.279Z] Total : 6467.31 808.41 0.00 0.00 2469.92 1716.76 9801.91 00:26:20.056 { 00:26:20.056 "results": [ 00:26:20.056 { 00:26:20.056 "job": "nvme0n1", 00:26:20.056 "core_mask": "0x2", 00:26:20.056 "workload": "randwrite", 00:26:20.056 "status": "finished", 00:26:20.056 "queue_depth": 16, 00:26:20.056 "io_size": 131072, 00:26:20.056 "runtime": 2.002843, 00:26:20.056 "iops": 6467.306723492556, 00:26:20.056 "mibps": 808.4133404365695, 00:26:20.056 "io_failed": 0, 00:26:20.056 "io_timeout": 0, 00:26:20.056 "avg_latency_us": 
2469.9210381345265, 00:26:20.056 "min_latency_us": 1716.7582608695652, 00:26:20.056 "max_latency_us": 9801.906086956522 00:26:20.056 } 00:26:20.056 ], 00:26:20.057 "core_count": 1 00:26:20.057 } 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:20.057 | .driver_specific 00:26:20.057 | .nvme_error 00:26:20.057 | .status_code 00:26:20.057 | .command_transient_transport_error' 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 418 > 0 )) 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3604524 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3604524 ']' 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3604524 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.057 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3604524 00:26:20.316 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.316 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.316 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3604524' 00:26:20.317 killing process with pid 3604524 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3604524 00:26:20.317 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.317 00:26:20.317 Latency(us) 00:26:20.317 [2024-11-19T16:44:22.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.317 [2024-11-19T16:44:22.540Z] =================================================================================================================== 00:26:20.317 [2024-11-19T16:44:22.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3604524 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3602859 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3602859 ']' 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3602859 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602859 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602859' 00:26:20.317 killing process with pid 3602859 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3602859 00:26:20.317 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3602859 00:26:20.576 00:26:20.576 real 0m13.881s 00:26:20.576 user 0m26.585s 00:26:20.576 sys 0m4.584s 00:26:20.576 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.576 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.576 ************************************ 00:26:20.576 END TEST nvmf_digest_error 00:26:20.577 ************************************ 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:20.577 rmmod nvme_tcp 00:26:20.577 rmmod nvme_fabrics 00:26:20.577 rmmod nvme_keyring 00:26:20.577 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:20.836 17:44:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3602859 ']' 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3602859 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3602859 ']' 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3602859 00:26:20.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3602859) - No such process 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3602859 is not found' 00:26:20.836 Process with pid 3602859 is not found 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:20.836 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:20.837 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:20.837 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:20.837 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:20.837 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:20.837 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.837 17:44:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.837 17:44:22 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.742 17:44:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.742 00:26:22.742 real 0m36.839s 00:26:22.742 user 0m55.876s 00:26:22.742 sys 0m13.722s 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:22.743 ************************************ 00:26:22.743 END TEST nvmf_digest 00:26:22.743 ************************************ 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.743 ************************************ 00:26:22.743 START TEST nvmf_bdevperf 00:26:22.743 ************************************ 00:26:22.743 17:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:23.003 * Looking for test storage... 
00:26:23.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.003 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:23.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.003 --rc genhtml_branch_coverage=1 00:26:23.003 --rc genhtml_function_coverage=1 00:26:23.003 --rc genhtml_legend=1 00:26:23.003 --rc geninfo_all_blocks=1 00:26:23.003 --rc geninfo_unexecuted_blocks=1 00:26:23.004 00:26:23.004 ' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:26:23.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.004 --rc genhtml_branch_coverage=1 00:26:23.004 --rc genhtml_function_coverage=1 00:26:23.004 --rc genhtml_legend=1 00:26:23.004 --rc geninfo_all_blocks=1 00:26:23.004 --rc geninfo_unexecuted_blocks=1 00:26:23.004 00:26:23.004 ' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:23.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.004 --rc genhtml_branch_coverage=1 00:26:23.004 --rc genhtml_function_coverage=1 00:26:23.004 --rc genhtml_legend=1 00:26:23.004 --rc geninfo_all_blocks=1 00:26:23.004 --rc geninfo_unexecuted_blocks=1 00:26:23.004 00:26:23.004 ' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:23.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.004 --rc genhtml_branch_coverage=1 00:26:23.004 --rc genhtml_function_coverage=1 00:26:23.004 --rc genhtml_legend=1 00:26:23.004 --rc geninfo_all_blocks=1 00:26:23.004 --rc geninfo_unexecuted_blocks=1 00:26:23.004 00:26:23.004 ' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:23.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.004 17:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.581 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.582 17:44:30 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:29.582 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.582 
17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:29.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:29.582 Found net devices under 0000:86:00.0: cvl_0_0 00:26:29.582 17:44:30 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:29.582 Found net devices under 0000:86:00.1: cvl_0_1 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.582 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.583 17:44:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:26:29.583 00:26:29.583 --- 10.0.0.2 ping statistics --- 00:26:29.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.583 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:26:29.583 00:26:29.583 --- 10.0.0.1 ping statistics --- 00:26:29.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.583 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3608528 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3608528 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3608528 ']' 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.583 [2024-11-19 17:44:31.150542] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:26:29.583 [2024-11-19 17:44:31.150596] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.583 [2024-11-19 17:44:31.235857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:29.583 [2024-11-19 17:44:31.282663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.583 [2024-11-19 17:44:31.282697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.583 [2024-11-19 17:44:31.282705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.583 [2024-11-19 17:44:31.282711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.583 [2024-11-19 17:44:31.282716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:29.583 [2024-11-19 17:44:31.284206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.583 [2024-11-19 17:44:31.284334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.583 [2024-11-19 17:44:31.284334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.583 [2024-11-19 17:44:31.420980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.583 Malloc0 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.583 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.584 [2024-11-19 17:44:31.488141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:29.584 
17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.584 { 00:26:29.584 "params": { 00:26:29.584 "name": "Nvme$subsystem", 00:26:29.584 "trtype": "$TEST_TRANSPORT", 00:26:29.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.584 "adrfam": "ipv4", 00:26:29.584 "trsvcid": "$NVMF_PORT", 00:26:29.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.584 "hdgst": ${hdgst:-false}, 00:26:29.584 "ddgst": ${ddgst:-false} 00:26:29.584 }, 00:26:29.584 "method": "bdev_nvme_attach_controller" 00:26:29.584 } 00:26:29.584 EOF 00:26:29.584 )") 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:29.584 17:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:29.584 "params": { 00:26:29.584 "name": "Nvme1", 00:26:29.584 "trtype": "tcp", 00:26:29.584 "traddr": "10.0.0.2", 00:26:29.584 "adrfam": "ipv4", 00:26:29.584 "trsvcid": "4420", 00:26:29.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.584 "hdgst": false, 00:26:29.584 "ddgst": false 00:26:29.584 }, 00:26:29.584 "method": "bdev_nvme_attach_controller" 00:26:29.584 }' 00:26:29.584 [2024-11-19 17:44:31.540709] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:26:29.584 [2024-11-19 17:44:31.540753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608718 ] 00:26:29.584 [2024-11-19 17:44:31.616488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.584 [2024-11-19 17:44:31.658110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.843 Running I/O for 1 seconds... 00:26:30.781 11229.00 IOPS, 43.86 MiB/s 00:26:30.781 Latency(us) 00:26:30.781 [2024-11-19T16:44:33.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.781 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:30.781 Verification LBA range: start 0x0 length 0x4000 00:26:30.781 Nvme1n1 : 1.01 11270.85 44.03 0.00 0.00 11312.33 2450.48 12708.29 00:26:30.781 [2024-11-19T16:44:33.004Z] =================================================================================================================== 00:26:30.781 [2024-11-19T16:44:33.004Z] Total : 11270.85 44.03 0.00 0.00 11312.33 2450.48 12708.29 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3609001 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.041 { 00:26:31.041 "params": { 00:26:31.041 "name": "Nvme$subsystem", 00:26:31.041 "trtype": "$TEST_TRANSPORT", 00:26:31.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.041 "adrfam": "ipv4", 00:26:31.041 "trsvcid": "$NVMF_PORT", 00:26:31.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.041 "hdgst": ${hdgst:-false}, 00:26:31.041 "ddgst": ${ddgst:-false} 00:26:31.041 }, 00:26:31.041 "method": "bdev_nvme_attach_controller" 00:26:31.041 } 00:26:31.041 EOF 00:26:31.041 )") 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:31.041 17:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:31.041 "params": { 00:26:31.041 "name": "Nvme1", 00:26:31.041 "trtype": "tcp", 00:26:31.041 "traddr": "10.0.0.2", 00:26:31.041 "adrfam": "ipv4", 00:26:31.041 "trsvcid": "4420", 00:26:31.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:31.041 "hdgst": false, 00:26:31.041 "ddgst": false 00:26:31.041 }, 00:26:31.041 "method": "bdev_nvme_attach_controller" 00:26:31.041 }' 00:26:31.041 [2024-11-19 17:44:33.154650] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:26:31.041 [2024-11-19 17:44:33.154700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609001 ]
00:26:31.041 [2024-11-19 17:44:33.228485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:31.300 [2024-11-19 17:44:33.267463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:31.300 Running I/O for 15 seconds...
00:26:33.617 10973.00 IOPS, 42.86 MiB/s [2024-11-19T16:44:36.416Z]
00:26:33.617 11131.00 IOPS, 43.48 MiB/s [2024-11-19T16:44:36.416Z]
17:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3608528
00:26:34.193 17:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:26:34.193 [2024-11-19 17:44:36.123385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.193 [2024-11-19 17:44:36.123423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair print_command / print_completion NOTICE pairs omitted: WRITE commands lba:107000-107752 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands lba:106744-106896 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), timestamps 17:44:36.123441-17:44:36.125364, each completed with ABORTED - SQ DELETION (00/08) qid:1 after the target process was killed ...]
00:26:34.196 [2024-11-19 17:44:36.125372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.196 [2024-11-19 17:44:36.125424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.196 [2024-11-19 17:44:36.125532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6cf0 is same with the state(6) to be set 00:26:34.196 [2024-11-19 17:44:36.125548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.196 [2024-11-19 17:44:36.125554] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.196 [2024-11-19 17:44:36.125560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106984 len:8 PRP1 0x0 PRP2 0x0 00:26:34.196 [2024-11-19 17:44:36.125568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.196 [2024-11-19 17:44:36.125663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.196 [2024-11-19 17:44:36.125670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.197 [2024-11-19 17:44:36.125677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.197 [2024-11-19 17:44:36.125684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.197 [2024-11-19 17:44:36.125691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.197 [2024-11-19 17:44:36.125699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.197 [2024-11-19 17:44:36.125705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.197 [2024-11-19 17:44:36.125714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.197 [2024-11-19 
17:44:36.128533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.197 [2024-11-19 17:44:36.128564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.197 [2024-11-19 17:44:36.129197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.197 [2024-11-19 17:44:36.129215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.197 [2024-11-19 17:44:36.129224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.197 [2024-11-19 17:44:36.129403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.197 [2024-11-19 17:44:36.129581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.197 [2024-11-19 17:44:36.129590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.197 [2024-11-19 17:44:36.129599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.197 [2024-11-19 17:44:36.129608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.197 [2024-11-19 17:44:36.141832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.142252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.142314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.142339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.142890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.143062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.143072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.197 [2024-11-19 17:44:36.143078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.197 [2024-11-19 17:44:36.143086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.197 [2024-11-19 17:44:36.154641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.155095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.155144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.155169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.155750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.156198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.156208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.197 [2024-11-19 17:44:36.156215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.197 [2024-11-19 17:44:36.156222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.197 [2024-11-19 17:44:36.167528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.167967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.168014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.168039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.168617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.169082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.169091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.197 [2024-11-19 17:44:36.169098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.197 [2024-11-19 17:44:36.169105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.197 [2024-11-19 17:44:36.180397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.180825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.180871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.180895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.181401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.181575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.181585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.197 [2024-11-19 17:44:36.181592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.197 [2024-11-19 17:44:36.181600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.197 [2024-11-19 17:44:36.193238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.193674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.193692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.193700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.193863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.194034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.194043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.197 [2024-11-19 17:44:36.194050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.197 [2024-11-19 17:44:36.194057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.197 [2024-11-19 17:44:36.206137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.206415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.206432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.206443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.206616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.206788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.206798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.197 [2024-11-19 17:44:36.206805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.197 [2024-11-19 17:44:36.206811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.197 [2024-11-19 17:44:36.219010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.219429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.219446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.219453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.219615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.219778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.219787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.197 [2024-11-19 17:44:36.219794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.197 [2024-11-19 17:44:36.219801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.197 [2024-11-19 17:44:36.231875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.197 [2024-11-19 17:44:36.232208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.197 [2024-11-19 17:44:36.232255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.197 [2024-11-19 17:44:36.232279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.197 [2024-11-19 17:44:36.232857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.197 [2024-11-19 17:44:36.233325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.197 [2024-11-19 17:44:36.233335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.233341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.233348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.244797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.198 [2024-11-19 17:44:36.245220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.198 [2024-11-19 17:44:36.245270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.198 [2024-11-19 17:44:36.245295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.198 [2024-11-19 17:44:36.245874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.198 [2024-11-19 17:44:36.246438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.198 [2024-11-19 17:44:36.246457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.246472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.246486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.259601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.198 [2024-11-19 17:44:36.260116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.198 [2024-11-19 17:44:36.260161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.198 [2024-11-19 17:44:36.260186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.198 [2024-11-19 17:44:36.260665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.198 [2024-11-19 17:44:36.260920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.198 [2024-11-19 17:44:36.260933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.260942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.260961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.272533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.198 [2024-11-19 17:44:36.272879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.198 [2024-11-19 17:44:36.272926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.198 [2024-11-19 17:44:36.272965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.198 [2024-11-19 17:44:36.273464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.198 [2024-11-19 17:44:36.273633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.198 [2024-11-19 17:44:36.273642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.273649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.273656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.285384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.198 [2024-11-19 17:44:36.285795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.198 [2024-11-19 17:44:36.285840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.198 [2024-11-19 17:44:36.285865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.198 [2024-11-19 17:44:36.286406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.198 [2024-11-19 17:44:36.286805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.198 [2024-11-19 17:44:36.286824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.286845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.286859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.300197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.198 [2024-11-19 17:44:36.300717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.198 [2024-11-19 17:44:36.300740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.198 [2024-11-19 17:44:36.300751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.198 [2024-11-19 17:44:36.301012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.198 [2024-11-19 17:44:36.301268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.198 [2024-11-19 17:44:36.301281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.301291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.301301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.313153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.198 [2024-11-19 17:44:36.313521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.198 [2024-11-19 17:44:36.313566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.198 [2024-11-19 17:44:36.313590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.198 [2024-11-19 17:44:36.314185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.198 [2024-11-19 17:44:36.314431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.198 [2024-11-19 17:44:36.314439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.314446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.314453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.326012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.198 [2024-11-19 17:44:36.326425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.198 [2024-11-19 17:44:36.326442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.198 [2024-11-19 17:44:36.326449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.198 [2024-11-19 17:44:36.326612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.198 [2024-11-19 17:44:36.326775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.198 [2024-11-19 17:44:36.326784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.198 [2024-11-19 17:44:36.326791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.198 [2024-11-19 17:44:36.326797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.198 [2024-11-19 17:44:36.338875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.198 [2024-11-19 17:44:36.339313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.198 [2024-11-19 17:44:36.339360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.198 [2024-11-19 17:44:36.339385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.198 [2024-11-19 17:44:36.339785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.199 [2024-11-19 17:44:36.339956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.199 [2024-11-19 17:44:36.339966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.199 [2024-11-19 17:44:36.339972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.199 [2024-11-19 17:44:36.339979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.199 [2024-11-19 17:44:36.351762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.199 [2024-11-19 17:44:36.352183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.199 [2024-11-19 17:44:36.352200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.199 [2024-11-19 17:44:36.352208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.199 [2024-11-19 17:44:36.352371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.199 [2024-11-19 17:44:36.352535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.199 [2024-11-19 17:44:36.352544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.199 [2024-11-19 17:44:36.352550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.199 [2024-11-19 17:44:36.352557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.199 [2024-11-19 17:44:36.364661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.199 [2024-11-19 17:44:36.364995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.199 [2024-11-19 17:44:36.365013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.199 [2024-11-19 17:44:36.365020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.199 [2024-11-19 17:44:36.365183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.199 [2024-11-19 17:44:36.365346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.199 [2024-11-19 17:44:36.365355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.199 [2024-11-19 17:44:36.365362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.199 [2024-11-19 17:44:36.365368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.199 [2024-11-19 17:44:36.377572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.199 [2024-11-19 17:44:36.377905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.199 [2024-11-19 17:44:36.377939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.199 [2024-11-19 17:44:36.377956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.199 [2024-11-19 17:44:36.378129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.199 [2024-11-19 17:44:36.378320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.199 [2024-11-19 17:44:36.378330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.199 [2024-11-19 17:44:36.378337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.199 [2024-11-19 17:44:36.378345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.199 [2024-11-19 17:44:36.390675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.199 [2024-11-19 17:44:36.391115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.199 [2024-11-19 17:44:36.391133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.199 [2024-11-19 17:44:36.391141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.199 [2024-11-19 17:44:36.391319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.199 [2024-11-19 17:44:36.391499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.199 [2024-11-19 17:44:36.391510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.199 [2024-11-19 17:44:36.391517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.199 [2024-11-19 17:44:36.391524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.199 [2024-11-19 17:44:36.403851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.199 [2024-11-19 17:44:36.404316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.199 [2024-11-19 17:44:36.404335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.199 [2024-11-19 17:44:36.404343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.199 [2024-11-19 17:44:36.404520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.199 [2024-11-19 17:44:36.404698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.199 [2024-11-19 17:44:36.404707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.199 [2024-11-19 17:44:36.404714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.199 [2024-11-19 17:44:36.404723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.538 [2024-11-19 17:44:36.416978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.538 [2024-11-19 17:44:36.417392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.538 [2024-11-19 17:44:36.417411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.538 [2024-11-19 17:44:36.417420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.538 [2024-11-19 17:44:36.417597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.538 [2024-11-19 17:44:36.417779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.538 [2024-11-19 17:44:36.417789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.538 [2024-11-19 17:44:36.417796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.538 [2024-11-19 17:44:36.417803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.538 [2024-11-19 17:44:36.430118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.538 [2024-11-19 17:44:36.430539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.538 [2024-11-19 17:44:36.430584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.538 [2024-11-19 17:44:36.430609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.538 [2024-11-19 17:44:36.431202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.431786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.431810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.431818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.431825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 9909.00 IOPS, 38.71 MiB/s [2024-11-19T16:44:36.762Z] [2024-11-19 17:44:36.444244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.444596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.444614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.444621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.444784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.444953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.444963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.444970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.444977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.457123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.457545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.457600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.457625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.458221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.458644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.458662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.458683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.458696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.471742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.472195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.472219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.472230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.472485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.472740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.472753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.472764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.472775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.484629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.485051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.485069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.485077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.485244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.485411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.485421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.485428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.485434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.497458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.497797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.497814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.497821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.497990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.498154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.498163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.498170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.498177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.510244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.510596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.510613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.510621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.510792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.510970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.510990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.510996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.511004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.523091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.523517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.523562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.523586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.524132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.524296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.524305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.524312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.524318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.535923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.536328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.536373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.536398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.536855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.537025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.537035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.537041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.537047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.548800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.549261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.549313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-11-19 17:44:36.549338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.539 [2024-11-19 17:44:36.549903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.539 [2024-11-19 17:44:36.550304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.539 [2024-11-19 17:44:36.550324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.539 [2024-11-19 17:44:36.550338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.539 [2024-11-19 17:44:36.550353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.539 [2024-11-19 17:44:36.563970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.539 [2024-11-19 17:44:36.564397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-11-19 17:44:36.564419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.564430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.564686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.564940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.564961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.564972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.564982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.577051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.577456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.577474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.577482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.577653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.577826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.577836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.577843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.577850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.589915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.590336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.590353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.590361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.590526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.590690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.590699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.590706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.590712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.602774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.603122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.603139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.603147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.603310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.603474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.603483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.603489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.603496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.615790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.616144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.616162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.616171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.616334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.616500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.616510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.616516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.616522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.628823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.629211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.629230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.629238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.629417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.629596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.629606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.629620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.629627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.641977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.642229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.642246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.642254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.642426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.642599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.642609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.642615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.642622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.655095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.655487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.655505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.655513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.655685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.655859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.655869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.655878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.655885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.667877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.668294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.668312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.668320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.668483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.668646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.668656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.668663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.668669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.680731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.681147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-11-19 17:44:36.681164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-11-19 17:44:36.681172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.540 [2024-11-19 17:44:36.681336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.540 [2024-11-19 17:44:36.681499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.540 [2024-11-19 17:44:36.681509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.540 [2024-11-19 17:44:36.681516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.540 [2024-11-19 17:44:36.681522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.540 [2024-11-19 17:44:36.693583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.540 [2024-11-19 17:44:36.693909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-11-19 17:44:36.693927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-11-19 17:44:36.693935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.541 [2024-11-19 17:44:36.694103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.541 [2024-11-19 17:44:36.694268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.541 [2024-11-19 17:44:36.694278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.541 [2024-11-19 17:44:36.694284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.541 [2024-11-19 17:44:36.694291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.541 [2024-11-19 17:44:36.706493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.541 [2024-11-19 17:44:36.706910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-11-19 17:44:36.706927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-11-19 17:44:36.706935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.541 [2024-11-19 17:44:36.707104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.541 [2024-11-19 17:44:36.707268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.541 [2024-11-19 17:44:36.707278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.541 [2024-11-19 17:44:36.707284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.541 [2024-11-19 17:44:36.707291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.541 [2024-11-19 17:44:36.719687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.541 [2024-11-19 17:44:36.720055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-11-19 17:44:36.720078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-11-19 17:44:36.720087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.541 [2024-11-19 17:44:36.720275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.541 [2024-11-19 17:44:36.720449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.541 [2024-11-19 17:44:36.720459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.541 [2024-11-19 17:44:36.720467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.541 [2024-11-19 17:44:36.720473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.541 [2024-11-19 17:44:36.732509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.541 [2024-11-19 17:44:36.732839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-11-19 17:44:36.732856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-11-19 17:44:36.732863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.541 [2024-11-19 17:44:36.733031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.541 [2024-11-19 17:44:36.733195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.541 [2024-11-19 17:44:36.733205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.541 [2024-11-19 17:44:36.733212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.541 [2024-11-19 17:44:36.733219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.541 [2024-11-19 17:44:36.745673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.541 [2024-11-19 17:44:36.746018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-11-19 17:44:36.746036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-11-19 17:44:36.746044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.541 [2024-11-19 17:44:36.746222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.541 [2024-11-19 17:44:36.746400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.541 [2024-11-19 17:44:36.746410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.541 [2024-11-19 17:44:36.746418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.541 [2024-11-19 17:44:36.746425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.826 [2024-11-19 17:44:36.758571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.826 [2024-11-19 17:44:36.758920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.826 [2024-11-19 17:44:36.758936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.826 [2024-11-19 17:44:36.758944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.826 [2024-11-19 17:44:36.759123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.826 [2024-11-19 17:44:36.759303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.759313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.759320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.759327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.771690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.772053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.772071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.772079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.772256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.772434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.772445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.772451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.772458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.784612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.785023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.785080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.785105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.785632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.785796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.785806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.785812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.785818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.797414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.797811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.797828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.797836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.798004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.798170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.798179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.798188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.798195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.810258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.810585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.810602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.810609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.810795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.810973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.810984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.810991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.810997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.823116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.823531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.823548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.823555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.823718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.823881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.823890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.823896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.823903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.835968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.836291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.836335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.836360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.836855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.837025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.837034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.837041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.837048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.848797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.849171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.849190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.849198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.849369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.849547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.849556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.849563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.827 [2024-11-19 17:44:36.849569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.827 [2024-11-19 17:44:36.861676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.827 [2024-11-19 17:44:36.862115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.827 [2024-11-19 17:44:36.862133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.827 [2024-11-19 17:44:36.862140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.827 [2024-11-19 17:44:36.862303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.827 [2024-11-19 17:44:36.862466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.827 [2024-11-19 17:44:36.862476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.827 [2024-11-19 17:44:36.862483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.862489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.874579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.828 [2024-11-19 17:44:36.874975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-11-19 17:44:36.875013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-11-19 17:44:36.875039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.828 [2024-11-19 17:44:36.875617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.828 [2024-11-19 17:44:36.876105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.828 [2024-11-19 17:44:36.876114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.828 [2024-11-19 17:44:36.876121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.876128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.887370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.828 [2024-11-19 17:44:36.887723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-11-19 17:44:36.887744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-11-19 17:44:36.887753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.828 [2024-11-19 17:44:36.887926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.828 [2024-11-19 17:44:36.888107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.828 [2024-11-19 17:44:36.888117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.828 [2024-11-19 17:44:36.888125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.888132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.900501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.828 [2024-11-19 17:44:36.900859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-11-19 17:44:36.900877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-11-19 17:44:36.900886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.828 [2024-11-19 17:44:36.901069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.828 [2024-11-19 17:44:36.901248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.828 [2024-11-19 17:44:36.901259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.828 [2024-11-19 17:44:36.901266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.901273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.913520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.828 [2024-11-19 17:44:36.913926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-11-19 17:44:36.913945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-11-19 17:44:36.913957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.828 [2024-11-19 17:44:36.914130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.828 [2024-11-19 17:44:36.914302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.828 [2024-11-19 17:44:36.914312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.828 [2024-11-19 17:44:36.914318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.914325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.926360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.828 [2024-11-19 17:44:36.926628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-11-19 17:44:36.926646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-11-19 17:44:36.926653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.828 [2024-11-19 17:44:36.926819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.828 [2024-11-19 17:44:36.926987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.828 [2024-11-19 17:44:36.926996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.828 [2024-11-19 17:44:36.927003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.927010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.939237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.828 [2024-11-19 17:44:36.939513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-11-19 17:44:36.939530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-11-19 17:44:36.939538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.828 [2024-11-19 17:44:36.939702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.828 [2024-11-19 17:44:36.939867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.828 [2024-11-19 17:44:36.939876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.828 [2024-11-19 17:44:36.939882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.939889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.952197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:34.828 [2024-11-19 17:44:36.952570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-11-19 17:44:36.952615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-11-19 17:44:36.952639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:34.828 [2024-11-19 17:44:36.953172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:34.828 [2024-11-19 17:44:36.953337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:34.828 [2024-11-19 17:44:36.953348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:34.828 [2024-11-19 17:44:36.953357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:34.828 [2024-11-19 17:44:36.953363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:34.828 [2024-11-19 17:44:36.965307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.828 [2024-11-19 17:44:36.965652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.828 [2024-11-19 17:44:36.965670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.828 [2024-11-19 17:44:36.965678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.828 [2024-11-19 17:44:36.965854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.828 [2024-11-19 17:44:36.966038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.828 [2024-11-19 17:44:36.966049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.828 [2024-11-19 17:44:36.966059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.828 [2024-11-19 17:44:36.966066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.828 [2024-11-19 17:44:36.978393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.828 [2024-11-19 17:44:36.978751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.828 [2024-11-19 17:44:36.978769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.828 [2024-11-19 17:44:36.978777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.828 [2024-11-19 17:44:36.978954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.828 [2024-11-19 17:44:36.979139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.828 [2024-11-19 17:44:36.979148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.828 [2024-11-19 17:44:36.979155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.829 [2024-11-19 17:44:36.979161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.829 [2024-11-19 17:44:36.991231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.829 [2024-11-19 17:44:36.991552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.829 [2024-11-19 17:44:36.991568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.829 [2024-11-19 17:44:36.991576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.829 [2024-11-19 17:44:36.991739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.829 [2024-11-19 17:44:36.991902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.829 [2024-11-19 17:44:36.991911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.829 [2024-11-19 17:44:36.991918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.829 [2024-11-19 17:44:36.991924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.829 [2024-11-19 17:44:37.004097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.829 [2024-11-19 17:44:37.004380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.829 [2024-11-19 17:44:37.004398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.829 [2024-11-19 17:44:37.004405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.829 [2024-11-19 17:44:37.004568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.829 [2024-11-19 17:44:37.004732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.829 [2024-11-19 17:44:37.004741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.829 [2024-11-19 17:44:37.004747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.829 [2024-11-19 17:44:37.004753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.829 [2024-11-19 17:44:37.017023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.829 [2024-11-19 17:44:37.017422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.829 [2024-11-19 17:44:37.017439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.829 [2024-11-19 17:44:37.017446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.829 [2024-11-19 17:44:37.017610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.829 [2024-11-19 17:44:37.017773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.829 [2024-11-19 17:44:37.017782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.829 [2024-11-19 17:44:37.017789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.829 [2024-11-19 17:44:37.017795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.829 [2024-11-19 17:44:37.029875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.829 [2024-11-19 17:44:37.030204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.829 [2024-11-19 17:44:37.030222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.829 [2024-11-19 17:44:37.030229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.829 [2024-11-19 17:44:37.030392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.829 [2024-11-19 17:44:37.030556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.829 [2024-11-19 17:44:37.030565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.829 [2024-11-19 17:44:37.030571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.829 [2024-11-19 17:44:37.030578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:34.829 [2024-11-19 17:44:37.042894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:34.829 [2024-11-19 17:44:37.043258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.829 [2024-11-19 17:44:37.043276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:34.829 [2024-11-19 17:44:37.043284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:34.829 [2024-11-19 17:44:37.043455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:34.829 [2024-11-19 17:44:37.043628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:34.829 [2024-11-19 17:44:37.043638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:34.829 [2024-11-19 17:44:37.043645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:34.829 [2024-11-19 17:44:37.043651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.090 [2024-11-19 17:44:37.055796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.090 [2024-11-19 17:44:37.056137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.090 [2024-11-19 17:44:37.056158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.090 [2024-11-19 17:44:37.056166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.090 [2024-11-19 17:44:37.056347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.090 [2024-11-19 17:44:37.056510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.090 [2024-11-19 17:44:37.056520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.090 [2024-11-19 17:44:37.056527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.090 [2024-11-19 17:44:37.056533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.068719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.069090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.069107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.069115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.069278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.069440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.069450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.069456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.069463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.081530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.081779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.081795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.081803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.081970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.082135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.082144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.082151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.082157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.094384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.094660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.094677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.094685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.094850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.095020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.095030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.095037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.095043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.107291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.107617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.107634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.107641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.107803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.107972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.107983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.107990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.107997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.120166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.120482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.120500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.120507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.120670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.120833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.120842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.120849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.120856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.133116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.133461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.133478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.133487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.133649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.133812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.133821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.133830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.133837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.146040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.146389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.146407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.146415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.146587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.146760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.146769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.146776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.146783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.159143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.159534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.159552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.159561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.159739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.159916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.159926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.159933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.159940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.171952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.172298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.172315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.172323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.172485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.172648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.091 [2024-11-19 17:44:37.172658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.091 [2024-11-19 17:44:37.172664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.091 [2024-11-19 17:44:37.172671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.091 [2024-11-19 17:44:37.184757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.091 [2024-11-19 17:44:37.185091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.091 [2024-11-19 17:44:37.185136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.091 [2024-11-19 17:44:37.185161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.091 [2024-11-19 17:44:37.185696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.091 [2024-11-19 17:44:37.185860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.092 [2024-11-19 17:44:37.185869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.092 [2024-11-19 17:44:37.185875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.092 [2024-11-19 17:44:37.185882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.092 [2024-11-19 17:44:37.197658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.092 [2024-11-19 17:44:37.198083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.092 [2024-11-19 17:44:37.198101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.092 [2024-11-19 17:44:37.198109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.092 [2024-11-19 17:44:37.198272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.092 [2024-11-19 17:44:37.198435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.092 [2024-11-19 17:44:37.198444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.092 [2024-11-19 17:44:37.198451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.092 [2024-11-19 17:44:37.198458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.092 [2024-11-19 17:44:37.210546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.092 [2024-11-19 17:44:37.210924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.092 [2024-11-19 17:44:37.210941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.092 [2024-11-19 17:44:37.210956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.092 [2024-11-19 17:44:37.211119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.092 [2024-11-19 17:44:37.211300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.092 [2024-11-19 17:44:37.211310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.092 [2024-11-19 17:44:37.211317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.092 [2024-11-19 17:44:37.211323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.092 [2024-11-19 17:44:37.223375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.092 [2024-11-19 17:44:37.223652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.092 [2024-11-19 17:44:37.223673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.092 [2024-11-19 17:44:37.223680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.092 [2024-11-19 17:44:37.223843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.092 [2024-11-19 17:44:37.224012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.092 [2024-11-19 17:44:37.224022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.092 [2024-11-19 17:44:37.224028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.092 [2024-11-19 17:44:37.224035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.092 [2024-11-19 17:44:37.236287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.092 [2024-11-19 17:44:37.236664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.092 [2024-11-19 17:44:37.236681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.092 [2024-11-19 17:44:37.236688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.092 [2024-11-19 17:44:37.236850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.092 [2024-11-19 17:44:37.237021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.092 [2024-11-19 17:44:37.237031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.092 [2024-11-19 17:44:37.237037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.092 [2024-11-19 17:44:37.237045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.092 [2024-11-19 17:44:37.249381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.092 [2024-11-19 17:44:37.249724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.092 [2024-11-19 17:44:37.249741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.092 [2024-11-19 17:44:37.249749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.092 [2024-11-19 17:44:37.249921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.092 [2024-11-19 17:44:37.250108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.092 [2024-11-19 17:44:37.250118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.092 [2024-11-19 17:44:37.250125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.092 [2024-11-19 17:44:37.250132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.092 [2024-11-19 17:44:37.262323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:35.092 [2024-11-19 17:44:37.262752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.092 [2024-11-19 17:44:37.262770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:35.092 [2024-11-19 17:44:37.262778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:35.092 [2024-11-19 17:44:37.262954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:35.092 [2024-11-19 17:44:37.263133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:35.092 [2024-11-19 17:44:37.263144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:35.092 [2024-11-19 17:44:37.263150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:35.092 [2024-11-19 17:44:37.263157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:35.092 [2024-11-19 17:44:37.275122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.092 [2024-11-19 17:44:37.275469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.092 [2024-11-19 17:44:37.275487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.092 [2024-11-19 17:44:37.275494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.092 [2024-11-19 17:44:37.275657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.092 [2024-11-19 17:44:37.275820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.092 [2024-11-19 17:44:37.275829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.092 [2024-11-19 17:44:37.275836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.092 [2024-11-19 17:44:37.275843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.092 [2024-11-19 17:44:37.287934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.092 [2024-11-19 17:44:37.288290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.092 [2024-11-19 17:44:37.288309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.092 [2024-11-19 17:44:37.288316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.092 [2024-11-19 17:44:37.288478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.092 [2024-11-19 17:44:37.288642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.092 [2024-11-19 17:44:37.288652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.092 [2024-11-19 17:44:37.288659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.092 [2024-11-19 17:44:37.288666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.092 [2024-11-19 17:44:37.300847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.092 [2024-11-19 17:44:37.301136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.092 [2024-11-19 17:44:37.301153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.092 [2024-11-19 17:44:37.301160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.092 [2024-11-19 17:44:37.301322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.092 [2024-11-19 17:44:37.301485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.092 [2024-11-19 17:44:37.301494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.092 [2024-11-19 17:44:37.301504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.092 [2024-11-19 17:44:37.301510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.313905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.314201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.314219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.314227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.314398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.314571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.314581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.314588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.353 [2024-11-19 17:44:37.314596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.326791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.327194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.327212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.327220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.327382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.327546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.327556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.327562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.353 [2024-11-19 17:44:37.327569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.339661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.340047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.340064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.340072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.340235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.340398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.340408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.340415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.353 [2024-11-19 17:44:37.340421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.352635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.353050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.353101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.353125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.353706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.353937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.353953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.353962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.353 [2024-11-19 17:44:37.353969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.365581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.366000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.366018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.366025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.366188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.366351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.366360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.366367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.353 [2024-11-19 17:44:37.366373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.378438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.378851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.378868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.378876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.379045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.379209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.379219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.379226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.353 [2024-11-19 17:44:37.379233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.391298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.391627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.391673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.391704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.392139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.392304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.392313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.392320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.353 [2024-11-19 17:44:37.392325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.353 [2024-11-19 17:44:37.404225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.353 [2024-11-19 17:44:37.404657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.353 [2024-11-19 17:44:37.404676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.353 [2024-11-19 17:44:37.404684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.353 [2024-11-19 17:44:37.404856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.353 [2024-11-19 17:44:37.405037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.353 [2024-11-19 17:44:37.405047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.353 [2024-11-19 17:44:37.405054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.405061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.417391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.417738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.417756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.417765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.417942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.418124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.418134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.418142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.418149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.430465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.430890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.430908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.430917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.431096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.431279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.431288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.431294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.431301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 7431.75 IOPS, 29.03 MiB/s [2024-11-19T16:44:37.577Z] [2024-11-19 17:44:37.444553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.444946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.444968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.444976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.445140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.445304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.445313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.445319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.445326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.457569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.457988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.458007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.458014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.458177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.458340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.458349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.458356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.458363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.470361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.470778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.470825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.470849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.471331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.471495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.471504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.471513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.471520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.483270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.483684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.483701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.483709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.483872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.484042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.484052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.484059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.484066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.496148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.496539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.496556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.496563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.496726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.496889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.496898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.496905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.496911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.509016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.509431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.509464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.509489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.510096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.510611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.510620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.510643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.510659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.524017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.524534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.524583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.524607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.525200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.354 [2024-11-19 17:44:37.525507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.354 [2024-11-19 17:44:37.525519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.354 [2024-11-19 17:44:37.525530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.354 [2024-11-19 17:44:37.525539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.354 [2024-11-19 17:44:37.536916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.354 [2024-11-19 17:44:37.537294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.354 [2024-11-19 17:44:37.537339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.354 [2024-11-19 17:44:37.537364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.354 [2024-11-19 17:44:37.537942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.355 [2024-11-19 17:44:37.538489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.355 [2024-11-19 17:44:37.538499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.355 [2024-11-19 17:44:37.538505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.355 [2024-11-19 17:44:37.538512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.355 [2024-11-19 17:44:37.549771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.355 [2024-11-19 17:44:37.550199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.355 [2024-11-19 17:44:37.550217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.355 [2024-11-19 17:44:37.550225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.355 [2024-11-19 17:44:37.550396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.355 [2024-11-19 17:44:37.550572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.355 [2024-11-19 17:44:37.550582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.355 [2024-11-19 17:44:37.550588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.355 [2024-11-19 17:44:37.550595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.355 [2024-11-19 17:44:37.562654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.355 [2024-11-19 17:44:37.563057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.355 [2024-11-19 17:44:37.563111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.355 [2024-11-19 17:44:37.563136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.355 [2024-11-19 17:44:37.563577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.355 [2024-11-19 17:44:37.563741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.355 [2024-11-19 17:44:37.563749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.355 [2024-11-19 17:44:37.563756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.355 [2024-11-19 17:44:37.563762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.615 – 00:26:35.879 [2024-11-19 17:44:37.575535 – 17:44:37.912136] The same nine-record reconnect sequence (nvme_ctrlr_disconnect → connect() failed, errno = 111 → sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 → … → Resetting controller failed.) repeats 27 more times at roughly 13 ms intervals, differing only in timestamps, as bdev_nvme keeps retrying the controller reset against the unreachable target.
00:26:35.879 [2024-11-19 17:44:37.924124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.879 [2024-11-19 17:44:37.924461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.879 [2024-11-19 17:44:37.924479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.879 [2024-11-19 17:44:37.924487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.879 [2024-11-19 17:44:37.924659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.879 [2024-11-19 17:44:37.924831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.879 [2024-11-19 17:44:37.924842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.879 [2024-11-19 17:44:37.924850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.879 [2024-11-19 17:44:37.924857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.879 [2024-11-19 17:44:37.937261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.879 [2024-11-19 17:44:37.937693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.879 [2024-11-19 17:44:37.937711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.879 [2024-11-19 17:44:37.937719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.879 [2024-11-19 17:44:37.937900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.879 [2024-11-19 17:44:37.938087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.879 [2024-11-19 17:44:37.938098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.879 [2024-11-19 17:44:37.938105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.879 [2024-11-19 17:44:37.938113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.879 [2024-11-19 17:44:37.950219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.879 [2024-11-19 17:44:37.950674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.879 [2024-11-19 17:44:37.950721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.879 [2024-11-19 17:44:37.950745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.879 [2024-11-19 17:44:37.951314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.879 [2024-11-19 17:44:37.951479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.879 [2024-11-19 17:44:37.951488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.879 [2024-11-19 17:44:37.951494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.879 [2024-11-19 17:44:37.951501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.879 [2024-11-19 17:44:37.963143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.879 [2024-11-19 17:44:37.963564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.879 [2024-11-19 17:44:37.963611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:37.963636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:37.964234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:37.964461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:37.964471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:37.964478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:37.964485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:37.975945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:37.976359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:37.976376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:37.976384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:37.976545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:37.976709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:37.976719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:37.976729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:37.976735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:37.988788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:37.989134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:37.989151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:37.989160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:37.989322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:37.989485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:37.989495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:37.989501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:37.989508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.001746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.002153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:38.002171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:38.002179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:38.002342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:38.002505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:38.002515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:38.002521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:38.002528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.014680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.015119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:38.015165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:38.015190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:38.015703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:38.015867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:38.015875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:38.015882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:38.015888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.027489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.027886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:38.027904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:38.027911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:38.028082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:38.028246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:38.028255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:38.028262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:38.028269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.040321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.040742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:38.040759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:38.040767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:38.040929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:38.041099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:38.041109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:38.041116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:38.041122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.053269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.053686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:38.053732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:38.053756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:38.054351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:38.054856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:38.054865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:38.054871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:38.054878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.066249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.066667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:38.066689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:38.066697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:38.066860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:38.067031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:38.067041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:38.067048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:38.067055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.079108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.079522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.880 [2024-11-19 17:44:38.079540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.880 [2024-11-19 17:44:38.079547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.880 [2024-11-19 17:44:38.079709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.880 [2024-11-19 17:44:38.079873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.880 [2024-11-19 17:44:38.079883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.880 [2024-11-19 17:44:38.079889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.880 [2024-11-19 17:44:38.079896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:35.880 [2024-11-19 17:44:38.092008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:35.880 [2024-11-19 17:44:38.092441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.881 [2024-11-19 17:44:38.092489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:35.881 [2024-11-19 17:44:38.092513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:35.881 [2024-11-19 17:44:38.092929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:35.881 [2024-11-19 17:44:38.093111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:35.881 [2024-11-19 17:44:38.093121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:35.881 [2024-11-19 17:44:38.093128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:35.881 [2024-11-19 17:44:38.093135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.141 [2024-11-19 17:44:38.104961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.141 [2024-11-19 17:44:38.105373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.141 [2024-11-19 17:44:38.105390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.141 [2024-11-19 17:44:38.105398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.141 [2024-11-19 17:44:38.105564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.141 [2024-11-19 17:44:38.105728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.141 [2024-11-19 17:44:38.105737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.141 [2024-11-19 17:44:38.105744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.141 [2024-11-19 17:44:38.105751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.141 [2024-11-19 17:44:38.117747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.141 [2024-11-19 17:44:38.118087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.141 [2024-11-19 17:44:38.118105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.141 [2024-11-19 17:44:38.118112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.141 [2024-11-19 17:44:38.118276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.141 [2024-11-19 17:44:38.118440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.141 [2024-11-19 17:44:38.118449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.141 [2024-11-19 17:44:38.118456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.141 [2024-11-19 17:44:38.118463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.141 [2024-11-19 17:44:38.130580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.141 [2024-11-19 17:44:38.130927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.141 [2024-11-19 17:44:38.130945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.141 [2024-11-19 17:44:38.130959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.141 [2024-11-19 17:44:38.131122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.141 [2024-11-19 17:44:38.131286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.141 [2024-11-19 17:44:38.131296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.141 [2024-11-19 17:44:38.131302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.141 [2024-11-19 17:44:38.131309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.141 [2024-11-19 17:44:38.143404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.141 [2024-11-19 17:44:38.143830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.141 [2024-11-19 17:44:38.143874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.141 [2024-11-19 17:44:38.143898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.141 [2024-11-19 17:44:38.144490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.141 [2024-11-19 17:44:38.144980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.141 [2024-11-19 17:44:38.144991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.141 [2024-11-19 17:44:38.145001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.141 [2024-11-19 17:44:38.145009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.141 [2024-11-19 17:44:38.156258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.141 [2024-11-19 17:44:38.156660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.141 [2024-11-19 17:44:38.156679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.141 [2024-11-19 17:44:38.156687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.141 [2024-11-19 17:44:38.156876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.141 [2024-11-19 17:44:38.157054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.141 [2024-11-19 17:44:38.157065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.141 [2024-11-19 17:44:38.157071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.141 [2024-11-19 17:44:38.157079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.141 [2024-11-19 17:44:38.169183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.141 [2024-11-19 17:44:38.169645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.141 [2024-11-19 17:44:38.169690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.141 [2024-11-19 17:44:38.169715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.141 [2024-11-19 17:44:38.170196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.141 [2024-11-19 17:44:38.170361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.170371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.170378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.170384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.182005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.182344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.182363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.182371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.182543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.182715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.182725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.182733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.182740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.195083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.195445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.195463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.195471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.195649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.195826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.195837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.195845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.195852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.207918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.208342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.208389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.208413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.208896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.209090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.209100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.209107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.209114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.220836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.221101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.221147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.221171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.221750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.222037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.222047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.222054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.222061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.233673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.234017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.234037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.234045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.234208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.234371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.234380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.234387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.234394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.246478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.246893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.246911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.246919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.247098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.247272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.247282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.247288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.247295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.259297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.259717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.259734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.259743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.259906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.260086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.260096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.260103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.142 [2024-11-19 17:44:38.260110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.142 [2024-11-19 17:44:38.272113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.142 [2024-11-19 17:44:38.272464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.142 [2024-11-19 17:44:38.272482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.142 [2024-11-19 17:44:38.272489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.142 [2024-11-19 17:44:38.272655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.142 [2024-11-19 17:44:38.272818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.142 [2024-11-19 17:44:38.272828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.142 [2024-11-19 17:44:38.272834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.143 [2024-11-19 17:44:38.272841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.143 [2024-11-19 17:44:38.284910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.143 [2024-11-19 17:44:38.285274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.143 [2024-11-19 17:44:38.285320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.143 [2024-11-19 17:44:38.285344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.143 [2024-11-19 17:44:38.285923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.143 [2024-11-19 17:44:38.286375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.143 [2024-11-19 17:44:38.286384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.143 [2024-11-19 17:44:38.286391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.143 [2024-11-19 17:44:38.286398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.143 [2024-11-19 17:44:38.297785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.143 [2024-11-19 17:44:38.298215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.143 [2024-11-19 17:44:38.298261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.143 [2024-11-19 17:44:38.298285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.143 [2024-11-19 17:44:38.298864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.143 [2024-11-19 17:44:38.299343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.143 [2024-11-19 17:44:38.299352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.143 [2024-11-19 17:44:38.299358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.143 [2024-11-19 17:44:38.299365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.143 [2024-11-19 17:44:38.310660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.143 [2024-11-19 17:44:38.311053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.143 [2024-11-19 17:44:38.311071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.143 [2024-11-19 17:44:38.311078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.143 [2024-11-19 17:44:38.311242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.143 [2024-11-19 17:44:38.311406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.143 [2024-11-19 17:44:38.311415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.143 [2024-11-19 17:44:38.311426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.143 [2024-11-19 17:44:38.311432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.143 [2024-11-19 17:44:38.323485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.143 [2024-11-19 17:44:38.323895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.143 [2024-11-19 17:44:38.323913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.143 [2024-11-19 17:44:38.323920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.143 [2024-11-19 17:44:38.324090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.143 [2024-11-19 17:44:38.324255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.143 [2024-11-19 17:44:38.324264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.143 [2024-11-19 17:44:38.324271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.143 [2024-11-19 17:44:38.324277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.143 [2024-11-19 17:44:38.336343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.143 [2024-11-19 17:44:38.336621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.143 [2024-11-19 17:44:38.336639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.143 [2024-11-19 17:44:38.336647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.143 [2024-11-19 17:44:38.336809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.143 [2024-11-19 17:44:38.336978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.143 [2024-11-19 17:44:38.336989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.143 [2024-11-19 17:44:38.336995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.143 [2024-11-19 17:44:38.337002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.143 [2024-11-19 17:44:38.349234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.143 [2024-11-19 17:44:38.349674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.143 [2024-11-19 17:44:38.349719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.143 [2024-11-19 17:44:38.349743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.143 [2024-11-19 17:44:38.350243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.143 [2024-11-19 17:44:38.350429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.143 [2024-11-19 17:44:38.350439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.143 [2024-11-19 17:44:38.350446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.143 [2024-11-19 17:44:38.350454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.404 [2024-11-19 17:44:38.362369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.404 [2024-11-19 17:44:38.362669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.404 [2024-11-19 17:44:38.362688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.404 [2024-11-19 17:44:38.362696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.404 [2024-11-19 17:44:38.362868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.404 [2024-11-19 17:44:38.363049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.404 [2024-11-19 17:44:38.363059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.404 [2024-11-19 17:44:38.363066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.404 [2024-11-19 17:44:38.363074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.404 [2024-11-19 17:44:38.375234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.404 [2024-11-19 17:44:38.375508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.404 [2024-11-19 17:44:38.375527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.404 [2024-11-19 17:44:38.375537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.404 [2024-11-19 17:44:38.375702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.404 [2024-11-19 17:44:38.375865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.404 [2024-11-19 17:44:38.375874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.404 [2024-11-19 17:44:38.375881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.404 [2024-11-19 17:44:38.375888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.404 [2024-11-19 17:44:38.388066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.404 [2024-11-19 17:44:38.388337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.404 [2024-11-19 17:44:38.388355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.404 [2024-11-19 17:44:38.388363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.404 [2024-11-19 17:44:38.388525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.404 [2024-11-19 17:44:38.388688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.404 [2024-11-19 17:44:38.388697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.404 [2024-11-19 17:44:38.388703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.404 [2024-11-19 17:44:38.388710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.404 [2024-11-19 17:44:38.401142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.404 [2024-11-19 17:44:38.401484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.404 [2024-11-19 17:44:38.401506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.404 [2024-11-19 17:44:38.401514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.404 [2024-11-19 17:44:38.401691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.404 [2024-11-19 17:44:38.401870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.404 [2024-11-19 17:44:38.401880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.404 [2024-11-19 17:44:38.401887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.404 [2024-11-19 17:44:38.401895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.404 [2024-11-19 17:44:38.414034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.404 [2024-11-19 17:44:38.414384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.404 [2024-11-19 17:44:38.414429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.404 [2024-11-19 17:44:38.414453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.404 [2024-11-19 17:44:38.415047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.404 [2024-11-19 17:44:38.415632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.404 [2024-11-19 17:44:38.415660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.404 [2024-11-19 17:44:38.415667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.404 [2024-11-19 17:44:38.415674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.404 [2024-11-19 17:44:38.426830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.404 [2024-11-19 17:44:38.427155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.404 [2024-11-19 17:44:38.427172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.404 [2024-11-19 17:44:38.427180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.404 [2024-11-19 17:44:38.427342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.404 [2024-11-19 17:44:38.427505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.404 [2024-11-19 17:44:38.427514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.404 [2024-11-19 17:44:38.427520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.404 [2024-11-19 17:44:38.427527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.404 [2024-11-19 17:44:38.439760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.404 [2024-11-19 17:44:38.440103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.405 [2024-11-19 17:44:38.440142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.405 [2024-11-19 17:44:38.440169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.405 [2024-11-19 17:44:38.440756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.405 [2024-11-19 17:44:38.440929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.405 [2024-11-19 17:44:38.440939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.405 [2024-11-19 17:44:38.440945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.405 [2024-11-19 17:44:38.440959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.405 5945.40 IOPS, 23.22 MiB/s [2024-11-19T16:44:38.628Z] [2024-11-19 17:44:38.452916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.405 [2024-11-19 17:44:38.453225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.405 [2024-11-19 17:44:38.453243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.405 [2024-11-19 17:44:38.453251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.405 [2024-11-19 17:44:38.453428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.405 [2024-11-19 17:44:38.453606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.405 [2024-11-19 17:44:38.453616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.405 [2024-11-19 17:44:38.453623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.405 [2024-11-19 17:44:38.453630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.405 [2024-11-19 17:44:38.465886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.405 [2024-11-19 17:44:38.466183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.405 [2024-11-19 17:44:38.466201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.405 [2024-11-19 17:44:38.466209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.405 [2024-11-19 17:44:38.466381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.405 [2024-11-19 17:44:38.466554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.405 [2024-11-19 17:44:38.466564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.405 [2024-11-19 17:44:38.466571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.405 [2024-11-19 17:44:38.466578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.405 [2024-11-19 17:44:38.478769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.405 [2024-11-19 17:44:38.479056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.405 [2024-11-19 17:44:38.479074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.405 [2024-11-19 17:44:38.479081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.405 [2024-11-19 17:44:38.479244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.405 [2024-11-19 17:44:38.479408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.405 [2024-11-19 17:44:38.479421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.405 [2024-11-19 17:44:38.479428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.405 [2024-11-19 17:44:38.479434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.405 [2024-11-19 17:44:38.491654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.405 [2024-11-19 17:44:38.492002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.405 [2024-11-19 17:44:38.492021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.405 [2024-11-19 17:44:38.492028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.405 [2024-11-19 17:44:38.492191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.405 [2024-11-19 17:44:38.492354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.405 [2024-11-19 17:44:38.492364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.405 [2024-11-19 17:44:38.492371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.405 [2024-11-19 17:44:38.492377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.668 [2024-11-19 17:44:38.855223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.668 [2024-11-19 17:44:38.855635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.668 [2024-11-19 17:44:38.855652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.668 [2024-11-19 17:44:38.855660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.668 [2024-11-19 17:44:38.855822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.668 [2024-11-19 17:44:38.855991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.669 [2024-11-19 17:44:38.856001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.669 [2024-11-19 17:44:38.856008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.669 [2024-11-19 17:44:38.856014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.669 [2024-11-19 17:44:38.868017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.669 [2024-11-19 17:44:38.868366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.669 [2024-11-19 17:44:38.868383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.669 [2024-11-19 17:44:38.868390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.669 [2024-11-19 17:44:38.868553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.669 [2024-11-19 17:44:38.868716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.669 [2024-11-19 17:44:38.868726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.669 [2024-11-19 17:44:38.868732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.669 [2024-11-19 17:44:38.868739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.669 [2024-11-19 17:44:38.881018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.669 [2024-11-19 17:44:38.881378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.669 [2024-11-19 17:44:38.881395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.669 [2024-11-19 17:44:38.881403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.669 [2024-11-19 17:44:38.881576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.669 [2024-11-19 17:44:38.881748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.669 [2024-11-19 17:44:38.881758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.669 [2024-11-19 17:44:38.881765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.669 [2024-11-19 17:44:38.881772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.929 [2024-11-19 17:44:38.894190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.929 [2024-11-19 17:44:38.894598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.929 [2024-11-19 17:44:38.894645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.929 [2024-11-19 17:44:38.894670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.929 [2024-11-19 17:44:38.895267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.895849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.895859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.895866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.895873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.907008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.907440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.907493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.907518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.930 [2024-11-19 17:44:38.907924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.908094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.908104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.908111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.908118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.919806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.920221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.920239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.920247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.930 [2024-11-19 17:44:38.920409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.920572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.920582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.920589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.920598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.932647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.933065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.933083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.933091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.930 [2024-11-19 17:44:38.933253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.933417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.933426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.933433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.933440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.945507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.945903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.945920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.945928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.930 [2024-11-19 17:44:38.946101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.946265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.946275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.946282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.946289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.958337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.958747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.958764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.958772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.930 [2024-11-19 17:44:38.958943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.959121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.959132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.959139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.959146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.971485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.971917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.971936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.971945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.930 [2024-11-19 17:44:38.972128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.972306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.972316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.972323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.972330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.984376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.984768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.984785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.984792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.930 [2024-11-19 17:44:38.984963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.930 [2024-11-19 17:44:38.985153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.930 [2024-11-19 17:44:38.985163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.930 [2024-11-19 17:44:38.985174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.930 [2024-11-19 17:44:38.985180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.930 [2024-11-19 17:44:38.997329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.930 [2024-11-19 17:44:38.997756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.930 [2024-11-19 17:44:38.997774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.930 [2024-11-19 17:44:38.997781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:38.997960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:38.998134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:38.998144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:38.998151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:38.998158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.010238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.010561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.010578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.931 [2024-11-19 17:44:39.010586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:39.010747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:39.010911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:39.010920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:39.010927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:39.010935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.023086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.023505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.023558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.931 [2024-11-19 17:44:39.023582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:39.024101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:39.024266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:39.024275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:39.024282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:39.024289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.035887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.036286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.036304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.931 [2024-11-19 17:44:39.036311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:39.036472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:39.036636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:39.036645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:39.036652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:39.036659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.048719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.049114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.049132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.931 [2024-11-19 17:44:39.049139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:39.049302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:39.049465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:39.049475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:39.049482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:39.049488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.061645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.062070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.062115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.931 [2024-11-19 17:44:39.062139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:39.062717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:39.063017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:39.063027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:39.063033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:39.063040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.074572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.074911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.074931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.931 [2024-11-19 17:44:39.074939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:39.075108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:39.075272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:39.075281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:39.075288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:39.075294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.087500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.087923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.087940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.931 [2024-11-19 17:44:39.087953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.931 [2024-11-19 17:44:39.088116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.931 [2024-11-19 17:44:39.088280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.931 [2024-11-19 17:44:39.088290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.931 [2024-11-19 17:44:39.088296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.931 [2024-11-19 17:44:39.088303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.931 [2024-11-19 17:44:39.100355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.931 [2024-11-19 17:44:39.100776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.931 [2024-11-19 17:44:39.100793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.932 [2024-11-19 17:44:39.100800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.932 [2024-11-19 17:44:39.100968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.932 [2024-11-19 17:44:39.101132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.932 [2024-11-19 17:44:39.101141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.932 [2024-11-19 17:44:39.101148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.932 [2024-11-19 17:44:39.101155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.932 [2024-11-19 17:44:39.113206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.932 [2024-11-19 17:44:39.113626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.932 [2024-11-19 17:44:39.113643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.932 [2024-11-19 17:44:39.113650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.932 [2024-11-19 17:44:39.113820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.932 [2024-11-19 17:44:39.113989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.932 [2024-11-19 17:44:39.113999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.932 [2024-11-19 17:44:39.114006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.932 [2024-11-19 17:44:39.114013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:36.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3608528 Killed "${NVMF_APP[@]}" "$@" 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.932 [2024-11-19 17:44:39.126339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.932 [2024-11-19 17:44:39.126763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.932 [2024-11-19 17:44:39.126781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.932 [2024-11-19 17:44:39.126790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3609933 00:26:36.932 [2024-11-19 17:44:39.126971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.932 [2024-11-19 17:44:39.127150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.932 [2024-11-19 17:44:39.127161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.932 [2024-11-19 17:44:39.127170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:36.932 [2024-11-19 17:44:39.127177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3609933 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3609933 ']' 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.932 17:44:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.932 [2024-11-19 17:44:39.139511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:36.932 [2024-11-19 17:44:39.139940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.932 [2024-11-19 17:44:39.139962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:36.932 [2024-11-19 17:44:39.139975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:36.932 [2024-11-19 17:44:39.140154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:36.932 [2024-11-19 17:44:39.140332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:36.932 [2024-11-19 17:44:39.140342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:36.932 [2024-11-19 17:44:39.140349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:36.932 [2024-11-19 17:44:39.140356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.192 [2024-11-19 17:44:39.152683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.192 [2024-11-19 17:44:39.153045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.192 [2024-11-19 17:44:39.153064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.192 [2024-11-19 17:44:39.153073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.192 [2024-11-19 17:44:39.153250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.192 [2024-11-19 17:44:39.153428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.192 [2024-11-19 17:44:39.153437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.192 [2024-11-19 17:44:39.153444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.192 [2024-11-19 17:44:39.153451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.192 [2024-11-19 17:44:39.165739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.192 [2024-11-19 17:44:39.166158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.192 [2024-11-19 17:44:39.166177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.192 [2024-11-19 17:44:39.166185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.192 [2024-11-19 17:44:39.166358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.192 [2024-11-19 17:44:39.166530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.192 [2024-11-19 17:44:39.166540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.192 [2024-11-19 17:44:39.166547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.192 [2024-11-19 17:44:39.166554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:37.192 [2024-11-19 17:44:39.175737] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:26:37.192 [2024-11-19 17:44:39.175779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.192 [2024-11-19 17:44:39.178687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.192 [2024-11-19 17:44:39.179095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.192 [2024-11-19 17:44:39.179113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.192 [2024-11-19 17:44:39.179124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.192 [2024-11-19 17:44:39.179298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.192 [2024-11-19 17:44:39.179471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.192 [2024-11-19 17:44:39.179480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.192 [2024-11-19 17:44:39.179488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.192 [2024-11-19 17:44:39.179496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.192 [2024-11-19 17:44:39.191658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.192 [2024-11-19 17:44:39.192091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.192 [2024-11-19 17:44:39.192110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.192 [2024-11-19 17:44:39.192118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.192 [2024-11-19 17:44:39.192292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.192 [2024-11-19 17:44:39.192464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.192 [2024-11-19 17:44:39.192473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.192 [2024-11-19 17:44:39.192480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.192 [2024-11-19 17:44:39.192487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.192 [2024-11-19 17:44:39.204699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.192 [2024-11-19 17:44:39.205051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.192 [2024-11-19 17:44:39.205069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.192 [2024-11-19 17:44:39.205077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.192 [2024-11-19 17:44:39.205251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.205425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.205434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.205441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.205448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.217771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.218207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.218226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.218235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.218412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.218594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.218604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.218613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.218620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.230967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.231395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.231412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.231421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.231598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.231776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.231785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.231792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.231800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.244160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.244598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.244616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.244625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.244803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.244986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.244996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.245003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.245011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.257215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.257571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.257588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.257596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.257769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.257941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.257956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.257968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.257975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.259643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:37.193 [2024-11-19 17:44:39.270228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.270684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.270704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.270713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.270888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.271069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.271080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.271088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.271096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.283260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.283686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.283704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.283712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.283886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.284066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.284077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.284085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.284093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.296356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.296783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.296801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.296809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.296987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.297161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.297171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.297178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.297185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:37.193 [2024-11-19 17:44:39.302140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.193 [2024-11-19 17:44:39.302177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.193 [2024-11-19 17:44:39.302185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.193 [2024-11-19 17:44:39.302191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:37.193 [2024-11-19 17:44:39.302196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.193 [2024-11-19 17:44:39.303491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.193 [2024-11-19 17:44:39.303604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.193 [2024-11-19 17:44:39.303605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.193 [2024-11-19 17:44:39.309499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.309960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.309981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.309991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.310171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.310353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.310363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.310371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.310379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.322547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.323017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.323039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.323049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.323229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.323409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.323418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.323427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.323436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.335763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.336238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.336261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.336270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.336450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.336637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.336647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.336656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.336664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.348839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.349165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.349188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.349198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.349375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.349556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.349566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.349574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.349583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.361919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.362383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.362405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.362414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.362594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.362774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.362784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.362793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.362801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.375115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.375554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.375573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.375583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.375760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.375940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.375955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.375968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.375976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.388301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.388742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.388761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.388769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.388950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.389129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.389138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.389146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.389153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.193 [2024-11-19 17:44:39.401468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.193 [2024-11-19 17:44:39.401906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.193 [2024-11-19 17:44:39.401924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.193 [2024-11-19 17:44:39.401932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.193 [2024-11-19 17:44:39.402113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.193 [2024-11-19 17:44:39.402293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.193 [2024-11-19 17:44:39.402303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.193 [2024-11-19 17:44:39.402310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.193 [2024-11-19 17:44:39.402318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.453 [2024-11-19 17:44:39.414626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.453 [2024-11-19 17:44:39.415089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.453 [2024-11-19 17:44:39.415110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.453 [2024-11-19 17:44:39.415118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.453 [2024-11-19 17:44:39.415296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.453 [2024-11-19 17:44:39.415474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.453 [2024-11-19 17:44:39.415485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.453 [2024-11-19 17:44:39.415492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.453 [2024-11-19 17:44:39.415499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.453 [2024-11-19 17:44:39.427817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.453 [2024-11-19 17:44:39.428226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.453 [2024-11-19 17:44:39.428244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.453 [2024-11-19 17:44:39.428252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.453 [2024-11-19 17:44:39.428430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.453 [2024-11-19 17:44:39.428608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.428618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.428625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.428632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.440955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.441392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.441409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.441418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.441595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.441773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.441783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.441790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.441798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 4954.50 IOPS, 19.35 MiB/s [2024-11-19T16:44:39.677Z] [2024-11-19 17:44:39.454082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.454511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.454529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.454537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.454715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.454894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.454904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.454911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.454919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.467234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.467642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.467660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.467673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.467851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.468036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.468047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.468055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.468063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.480369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.480770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.480788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.480796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.480977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.481155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.481165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.481173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.481179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.493484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.493889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.493908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.493916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.494098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.494277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.494287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.494295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.494302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.506608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.507057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.507075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.507084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.507261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.507442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.507452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.507459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.507466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.519774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.520140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.520158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.520166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.520344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.520522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.520532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.520539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.520546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.532855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.533266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.533284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.533293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.533469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.533647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.533656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.533664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.533670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.546001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.546436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.546454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.546462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.454 [2024-11-19 17:44:39.546638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.454 [2024-11-19 17:44:39.546817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.454 [2024-11-19 17:44:39.546826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.454 [2024-11-19 17:44:39.546837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.454 [2024-11-19 17:44:39.546845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.454 [2024-11-19 17:44:39.559146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.454 [2024-11-19 17:44:39.559580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.454 [2024-11-19 17:44:39.559597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.454 [2024-11-19 17:44:39.559605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.559783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.559967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.559977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.559985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.559993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.572309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.572741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.572759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.572767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.572944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.573127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.573137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.573144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.573151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.585457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.585886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.585904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.585912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.586094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.586273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.586282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.586289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.586296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.598610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.599062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.599080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.599089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.599274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.599448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.599458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.599464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.599471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.611770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.612202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.612220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.612229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.612407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.612585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.612595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.612602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.612609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.624916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.625254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.625272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.625280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.625456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.625634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.625644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.625652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.625659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.637965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.638392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.638410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.638421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.638599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.638777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.638787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.638794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.638801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.651117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.651535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.651552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.651560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.651737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.651916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.651927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.651934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.651941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.455 [2024-11-19 17:44:39.664252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.455 [2024-11-19 17:44:39.664602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.455 [2024-11-19 17:44:39.664619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.455 [2024-11-19 17:44:39.664627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.455 [2024-11-19 17:44:39.664804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.455 [2024-11-19 17:44:39.664988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.455 [2024-11-19 17:44:39.664998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.455 [2024-11-19 17:44:39.665006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.455 [2024-11-19 17:44:39.665014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.715 [2024-11-19 17:44:39.677319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.715 [2024-11-19 17:44:39.677800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.715 [2024-11-19 17:44:39.677818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.715 [2024-11-19 17:44:39.677827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.716 [2024-11-19 17:44:39.678008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.716 [2024-11-19 17:44:39.678191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.716 [2024-11-19 17:44:39.678202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.716 [2024-11-19 17:44:39.678209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.716 [2024-11-19 17:44:39.678217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.716 [2024-11-19 17:44:39.690352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.716 [2024-11-19 17:44:39.690784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.716 [2024-11-19 17:44:39.690802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.716 [2024-11-19 17:44:39.690811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.716 [2024-11-19 17:44:39.690993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.716 [2024-11-19 17:44:39.691171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.716 [2024-11-19 17:44:39.691181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.716 [2024-11-19 17:44:39.691189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.716 [2024-11-19 17:44:39.691196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.716 [2024-11-19 17:44:39.703499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.716 [2024-11-19 17:44:39.703850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.716 [2024-11-19 17:44:39.703867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.716 [2024-11-19 17:44:39.703875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.716 [2024-11-19 17:44:39.704057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.716 [2024-11-19 17:44:39.704234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.716 [2024-11-19 17:44:39.704244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.716 [2024-11-19 17:44:39.704251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.716 [2024-11-19 17:44:39.704258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.716 [2024-11-19 17:44:39.716569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.716 [2024-11-19 17:44:39.717002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.716 [2024-11-19 17:44:39.717019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.716 [2024-11-19 17:44:39.717028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.716 [2024-11-19 17:44:39.717205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.716 [2024-11-19 17:44:39.717385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.716 [2024-11-19 17:44:39.717394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.716 [2024-11-19 17:44:39.717409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.716 [2024-11-19 17:44:39.717417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.716 [2024-11-19 17:44:39.729733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:37.716 [2024-11-19 17:44:39.730087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.716 [2024-11-19 17:44:39.730105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420
00:26:37.716 [2024-11-19 17:44:39.730113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set
00:26:37.716 [2024-11-19 17:44:39.730290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor
00:26:37.716 [2024-11-19 17:44:39.730469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:37.716 [2024-11-19 17:44:39.730477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:37.716 [2024-11-19 17:44:39.730485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:37.716 [2024-11-19 17:44:39.730492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:37.716 [2024-11-19 17:44:39.742808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.716 [2024-11-19 17:44:39.743245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.716 [2024-11-19 17:44:39.743263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.716 [2024-11-19 17:44:39.743271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.716 [2024-11-19 17:44:39.743448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.716 [2024-11-19 17:44:39.743625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.716 [2024-11-19 17:44:39.743634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.716 [2024-11-19 17:44:39.743642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.716 [2024-11-19 17:44:39.743649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.716 [2024-11-19 17:44:39.755961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.716 [2024-11-19 17:44:39.756372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.716 [2024-11-19 17:44:39.756388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.716 [2024-11-19 17:44:39.756396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.716 [2024-11-19 17:44:39.756573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.716 [2024-11-19 17:44:39.756751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.716 [2024-11-19 17:44:39.756760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.716 [2024-11-19 17:44:39.756767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.716 [2024-11-19 17:44:39.756773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.716 [2024-11-19 17:44:39.769103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.716 [2024-11-19 17:44:39.769463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.716 [2024-11-19 17:44:39.769480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.716 [2024-11-19 17:44:39.769488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.716 [2024-11-19 17:44:39.769665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.716 [2024-11-19 17:44:39.769843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.716 [2024-11-19 17:44:39.769851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.716 [2024-11-19 17:44:39.769858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.716 [2024-11-19 17:44:39.769864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.716 [2024-11-19 17:44:39.782193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.716 [2024-11-19 17:44:39.782634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.716 [2024-11-19 17:44:39.782651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.716 [2024-11-19 17:44:39.782659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.716 [2024-11-19 17:44:39.782835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.716 [2024-11-19 17:44:39.783019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.716 [2024-11-19 17:44:39.783029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.716 [2024-11-19 17:44:39.783036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.716 [2024-11-19 17:44:39.783042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.716 [2024-11-19 17:44:39.795363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.716 [2024-11-19 17:44:39.795793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.716 [2024-11-19 17:44:39.795810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.716 [2024-11-19 17:44:39.795818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.716 [2024-11-19 17:44:39.795998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.716 [2024-11-19 17:44:39.796177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.716 [2024-11-19 17:44:39.796186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.716 [2024-11-19 17:44:39.796194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.716 [2024-11-19 17:44:39.796201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.716 [2024-11-19 17:44:39.808529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.716 [2024-11-19 17:44:39.808826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.808844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.808856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.809040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.809219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.809228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.809234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.809242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.821723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.822137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.822157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.822165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.822343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.822521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.822531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.822539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.822546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.834900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.835200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.835218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.835226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.835402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.835580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.835589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.835597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.835603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.848100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.848438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.848456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.848464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.848642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.848822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.848831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.848838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.848845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.861182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.861544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.861561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.861569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.861746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.861923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.861932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.861939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.861946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.874280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.874704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.874722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.874729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.874905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.875090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.875099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.875107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.875113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.887445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.887747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.887765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.887773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.887955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.888133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.888142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.888152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.888159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.900509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.900860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.900878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.900886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.901070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.901248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.901257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.901264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.901271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.913593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.913885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.913902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.913910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.914091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.914270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.914279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.914286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.914293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.717 [2024-11-19 17:44:39.926777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.717 [2024-11-19 17:44:39.927134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.717 [2024-11-19 17:44:39.927152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.717 [2024-11-19 17:44:39.927160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.717 [2024-11-19 17:44:39.927336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.717 [2024-11-19 17:44:39.927514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.717 [2024-11-19 17:44:39.927523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.717 [2024-11-19 17:44:39.927530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.717 [2024-11-19 17:44:39.927537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.978 [2024-11-19 17:44:39.939858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.978 [2024-11-19 17:44:39.940251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.978 [2024-11-19 17:44:39.940268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.978 [2024-11-19 17:44:39.940277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.978 [2024-11-19 17:44:39.940455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.978 [2024-11-19 17:44:39.940634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.978 [2024-11-19 17:44:39.940644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.978 [2024-11-19 17:44:39.940651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.978 [2024-11-19 17:44:39.940658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.978 [2024-11-19 17:44:39.953001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.978 [2024-11-19 17:44:39.953361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.978 [2024-11-19 17:44:39.953379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.978 [2024-11-19 17:44:39.953386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.978 [2024-11-19 17:44:39.953563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.978 [2024-11-19 17:44:39.953742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.978 [2024-11-19 17:44:39.953751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.978 [2024-11-19 17:44:39.953758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.978 [2024-11-19 17:44:39.953764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.978 [2024-11-19 17:44:39.966110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.978 [2024-11-19 17:44:39.966451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.978 [2024-11-19 17:44:39.966468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.978 [2024-11-19 17:44:39.966476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.978 [2024-11-19 17:44:39.966652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.978 [2024-11-19 17:44:39.966834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.978 [2024-11-19 17:44:39.966843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.978 [2024-11-19 17:44:39.966851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.978 [2024-11-19 17:44:39.966857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.978 [2024-11-19 17:44:39.979187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.978 [2024-11-19 17:44:39.979471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.978 [2024-11-19 17:44:39.979489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.978 [2024-11-19 17:44:39.979501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.978 [2024-11-19 17:44:39.979678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.978 [2024-11-19 17:44:39.979856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.978 [2024-11-19 17:44:39.979866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.978 [2024-11-19 17:44:39.979873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.978 [2024-11-19 17:44:39.979879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.978 [2024-11-19 17:44:39.992247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.978 [2024-11-19 17:44:39.992660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.978 [2024-11-19 17:44:39.992678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.978 [2024-11-19 17:44:39.992686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.978 [2024-11-19 17:44:39.992863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.978 [2024-11-19 17:44:39.993047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:39.993057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:39.993064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:39.993070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 [2024-11-19 17:44:40.005390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 [2024-11-19 17:44:40.005827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.005845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.005854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 [2024-11-19 17:44:40.006041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.006222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.006231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.006238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.006244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 [2024-11-19 17:44:40.018769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 [2024-11-19 17:44:40.019171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.019242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.019256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 [2024-11-19 17:44:40.019458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.019664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.019673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.019683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.019692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.979 [2024-11-19 17:44:40.031846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.979 [2024-11-19 17:44:40.032135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.032153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.032161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.979 [2024-11-19 17:44:40.032337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.032516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.032525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.032533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.032540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 [2024-11-19 17:44:40.045043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 [2024-11-19 17:44:40.045329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.045346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.045354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 [2024-11-19 17:44:40.045531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.045709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.045718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.045725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.045731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 [2024-11-19 17:44:40.058326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 [2024-11-19 17:44:40.058672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.058689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.058697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 [2024-11-19 17:44:40.058878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.059062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.059071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.059078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.059085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.979 [2024-11-19 17:44:40.071405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 [2024-11-19 17:44:40.071862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.071880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.071888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 [2024-11-19 17:44:40.072069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.072248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.072257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.072264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.072270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 [2024-11-19 17:44:40.074282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.979 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.979 [2024-11-19 17:44:40.084594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 [2024-11-19 17:44:40.085009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.085027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.085034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 [2024-11-19 17:44:40.085211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.085389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.085398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.085405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.085415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.979 [2024-11-19 17:44:40.097744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.979 [2024-11-19 17:44:40.098140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.979 [2024-11-19 17:44:40.098158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.979 [2024-11-19 17:44:40.098166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.979 [2024-11-19 17:44:40.098343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.979 [2024-11-19 17:44:40.098521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.979 [2024-11-19 17:44:40.098530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.979 [2024-11-19 17:44:40.098538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.979 [2024-11-19 17:44:40.098544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.980 [2024-11-19 17:44:40.110891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.980 [2024-11-19 17:44:40.111271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.980 [2024-11-19 17:44:40.111289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.980 [2024-11-19 17:44:40.111297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.980 [2024-11-19 17:44:40.111474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.980 [2024-11-19 17:44:40.111652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.980 [2024-11-19 17:44:40.111661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.980 [2024-11-19 17:44:40.111668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.980 [2024-11-19 17:44:40.111674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.980 Malloc0 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.980 [2024-11-19 17:44:40.124006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.980 [2024-11-19 17:44:40.124345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.980 [2024-11-19 17:44:40.124362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.980 [2024-11-19 17:44:40.124370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.980 [2024-11-19 17:44:40.124546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.980 [2024-11-19 17:44:40.124724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.980 [2024-11-19 17:44:40.124733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.980 [2024-11-19 17:44:40.124744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.980 [2024-11-19 17:44:40.124751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.980 [2024-11-19 17:44:40.137081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:37.980 [2024-11-19 17:44:40.137373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.980 [2024-11-19 17:44:40.137390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128d500 with addr=10.0.0.2, port=4420 00:26:37.980 [2024-11-19 17:44:40.137398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d500 is same with the state(6) to be set 00:26:37.980 [2024-11-19 17:44:40.137575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128d500 (9): Bad file descriptor 00:26:37.980 [2024-11-19 17:44:40.137753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:37.980 [2024-11-19 17:44:40.137762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:37.980 [2024-11-19 17:44:40.137769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:37.980 [2024-11-19 17:44:40.137776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.980 [2024-11-19 17:44:40.142503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.980 17:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3609001 00:26:37.980 [2024-11-19 17:44:40.150117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:38.238 [2024-11-19 17:44:40.259611] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:39.614 4546.86 IOPS, 17.76 MiB/s [2024-11-19T16:44:42.775Z] 5371.88 IOPS, 20.98 MiB/s [2024-11-19T16:44:43.712Z] 6014.67 IOPS, 23.49 MiB/s [2024-11-19T16:44:44.650Z] 6535.00 IOPS, 25.53 MiB/s [2024-11-19T16:44:45.587Z] 6961.36 IOPS, 27.19 MiB/s [2024-11-19T16:44:46.526Z] 7312.92 IOPS, 28.57 MiB/s [2024-11-19T16:44:47.904Z] 7597.77 IOPS, 29.68 MiB/s [2024-11-19T16:44:48.843Z] 7855.43 IOPS, 30.69 MiB/s 00:26:46.620 Latency(us) 00:26:46.620 [2024-11-19T16:44:48.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.620 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:46.620 Verification LBA range: start 0x0 length 0x4000 00:26:46.620 Nvme1n1 : 15.01 8068.28 31.52 13221.16 0.00 5992.64 648.24 13620.09 00:26:46.620 [2024-11-19T16:44:48.843Z] =================================================================================================================== 00:26:46.620 [2024-11-19T16:44:48.843Z] Total : 8068.28 31.52 13221.16 0.00 5992.64 648.24 13620.09 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:46.620 17:44:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.620 rmmod nvme_tcp 00:26:46.620 rmmod nvme_fabrics 00:26:46.620 rmmod nvme_keyring 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3609933 ']' 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3609933 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3609933 ']' 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3609933 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609933 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609933' 00:26:46.620 killing process with pid 3609933 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 3609933 00:26:46.620 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3609933 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.880 17:44:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:49.415 00:26:49.415 real 0m26.080s 00:26:49.415 user 1m0.812s 00:26:49.415 sys 0m6.829s 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:49.415 ************************************ 00:26:49.415 END TEST nvmf_bdevperf 00:26:49.415 ************************************ 00:26:49.415 17:44:51 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.415 ************************************ 00:26:49.415 START TEST nvmf_target_disconnect 00:26:49.415 ************************************ 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:49.415 * Looking for test storage... 00:26:49.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.415 17:44:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:49.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.415 --rc genhtml_branch_coverage=1 00:26:49.415 --rc genhtml_function_coverage=1 00:26:49.415 --rc genhtml_legend=1 00:26:49.415 --rc geninfo_all_blocks=1 00:26:49.415 --rc geninfo_unexecuted_blocks=1 
00:26:49.415 00:26:49.415 ' 00:26:49.415 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:49.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.415 --rc genhtml_branch_coverage=1 00:26:49.415 --rc genhtml_function_coverage=1 00:26:49.415 --rc genhtml_legend=1 00:26:49.416 --rc geninfo_all_blocks=1 00:26:49.416 --rc geninfo_unexecuted_blocks=1 00:26:49.416 00:26:49.416 ' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:49.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.416 --rc genhtml_branch_coverage=1 00:26:49.416 --rc genhtml_function_coverage=1 00:26:49.416 --rc genhtml_legend=1 00:26:49.416 --rc geninfo_all_blocks=1 00:26:49.416 --rc geninfo_unexecuted_blocks=1 00:26:49.416 00:26:49.416 ' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:49.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.416 --rc genhtml_branch_coverage=1 00:26:49.416 --rc genhtml_function_coverage=1 00:26:49.416 --rc genhtml_legend=1 00:26:49.416 --rc geninfo_all_blocks=1 00:26:49.416 --rc geninfo_unexecuted_blocks=1 00:26:49.416 00:26:49.416 ' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.416 17:44:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.416 17:44:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:55.991 
17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.991 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:55.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:55.992 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:55.992 Found net devices under 0000:86:00.0: cvl_0_0 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:55.992 Found net devices under 0000:86:00.1: cvl_0_1 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.992 17:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.992 17:44:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:55.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:26:55.992 00:26:55.992 --- 10.0.0.2 ping statistics --- 00:26:55.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.992 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:26:55.992 00:26:55.992 --- 10.0.0.1 ping statistics --- 00:26:55.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.992 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:55.992 17:44:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:55.992 ************************************ 00:26:55.992 START TEST nvmf_target_disconnect_tc1 00:26:55.992 ************************************ 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.992 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.993 [2024-11-19 17:44:57.386786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.993 [2024-11-19 17:44:57.386831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138aab0 with 
addr=10.0.0.2, port=4420 00:26:55.993 [2024-11-19 17:44:57.386849] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:55.993 [2024-11-19 17:44:57.386858] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:55.993 [2024-11-19 17:44:57.386865] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:55.993 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:55.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:55.993 Initializing NVMe Controllers 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:55.993 00:26:55.993 real 0m0.120s 00:26:55.993 user 0m0.053s 00:26:55.993 sys 0m0.065s 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 ************************************ 00:26:55.993 END TEST nvmf_target_disconnect_tc1 00:26:55.993 ************************************ 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:55.993 17:44:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 ************************************ 00:26:55.993 START TEST nvmf_target_disconnect_tc2 00:26:55.993 ************************************ 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3615069 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3615069 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3615069 ']' 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 [2024-11-19 17:44:57.528307] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:26:55.993 [2024-11-19 17:44:57.528353] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.993 [2024-11-19 17:44:57.606712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.993 [2024-11-19 17:44:57.648703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.993 [2024-11-19 17:44:57.648741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.993 [2024-11-19 17:44:57.648748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.993 [2024-11-19 17:44:57.648755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.993 [2024-11-19 17:44:57.648760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:55.993 [2024-11-19 17:44:57.650346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:55.993 [2024-11-19 17:44:57.650456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:55.993 [2024-11-19 17:44:57.650564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:55.993 [2024-11-19 17:44:57.650564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 Malloc0 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.993 17:44:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 [2024-11-19 17:44:57.820155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.993 17:44:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.993 [2024-11-19 17:44:57.852395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:55.993 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.994 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:55.994 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.994 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.994 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.994 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3615122
00:26:55.994 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:26:55.994 17:44:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:57.920 17:44:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3615069
00:26:57.920 17:44:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Write completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 [2024-11-19 17:44:59.880750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.920 Read completed with error (sct=0, sc=8)
00:26:57.920 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 [2024-11-19 17:44:59.880955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 [2024-11-19 17:44:59.881152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Read completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 Write completed with error (sct=0, sc=8)
00:26:57.921 starting I/O failed
00:26:57.921 [2024-11-19 17:44:59.881344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:57.921 [2024-11-19 17:44:59.881510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.881532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-11-19 17:44:59.881788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.881806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-11-19 17:44:59.881961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.881973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-11-19 17:44:59.882199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.882210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-11-19 17:44:59.882359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.882370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-11-19 17:44:59.882465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.882475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-11-19 17:44:59.882604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.882615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-11-19 17:44:59.882848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-11-19 17:44:59.882879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.883027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.883061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.883204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.883235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.883411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.883443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.883568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.883599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.883805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.883836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.883955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.883987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.884120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.884131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.884268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.884279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.884442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.884452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.884528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.884538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.884614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.884623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.884709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.884719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.884873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.884884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.885973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.885983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.886970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.886981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.887137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.887148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.887277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.887287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.887412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.887422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.887586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.887596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.887735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.887746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.887890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.887900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.888028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.888039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.922 qpair failed and we were unable to recover it.
00:26:57.922 [2024-11-19 17:44:59.888173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.922 [2024-11-19 17:44:59.888183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.888326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.888336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.888474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.888484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.888579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.888589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.888736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.888747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.888833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.888843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.888920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.888929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.889942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.889958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.890975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.890986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.891057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.891067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.891142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.923 [2024-11-19 17:44:59.891152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.923 qpair failed and we were unable to recover it.
00:26:57.923 [2024-11-19 17:44:59.891228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-11-19 17:44:59.891366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-11-19 17:44:59.891441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-11-19 17:44:59.891527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-11-19 17:44:59.891593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 
00:26:57.923 [2024-11-19 17:44:59.891732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-11-19 17:44:59.891839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-11-19 17:44:59.891934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.891944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-11-19 17:44:59.892024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-11-19 17:44:59.892034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.892230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.892307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.892398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.892497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.892577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.892646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.892728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.892956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.892967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.893033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.893106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.893179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.893274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.893358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.893576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.893651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.893792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.893940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.893959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.894532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.894926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.894940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.895072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.895221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.895357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.895428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.895588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.895686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.895837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.895933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.895953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.896112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.896127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.896306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.896321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-11-19 17:44:59.896453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.896468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-11-19 17:44:59.896566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-11-19 17:44:59.896581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.896788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.896803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.897014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.897048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.897235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.897266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 
00:26:57.925 [2024-11-19 17:44:59.897528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.897559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.897810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.897841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.898029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.898061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.898266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.898297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.898522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.898537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 
00:26:57.925 [2024-11-19 17:44:59.898674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.898688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.898755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.898769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.898855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.898877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.898971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.898986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.899188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.899203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 
00:26:57.925 [2024-11-19 17:44:59.899359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.899374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.899458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.899479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.899619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.899634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.899782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.899824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.899977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.900011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 
00:26:57.925 [2024-11-19 17:44:59.900233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.900264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.900508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.900523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.900598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.925 [2024-11-19 17:44:59.900612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.925 qpair failed and we were unable to recover it. 00:26:57.925 [2024-11-19 17:44:59.900816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.926 [2024-11-19 17:44:59.900831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.926 qpair failed and we were unable to recover it. 00:26:57.926 [2024-11-19 17:44:59.900963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.926 [2024-11-19 17:44:59.900979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.926 qpair failed and we were unable to recover it. 
00:26:57.926 [2024-11-19 17:44:59.901195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.926 [2024-11-19 17:44:59.901224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.926 qpair failed and we were unable to recover it. 00:26:57.926 [2024-11-19 17:44:59.901396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.926 [2024-11-19 17:44:59.901425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.926 qpair failed and we were unable to recover it. 00:26:57.926 [2024-11-19 17:44:59.901611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.926 [2024-11-19 17:44:59.901640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.926 qpair failed and we were unable to recover it. 00:26:57.926 [2024-11-19 17:44:59.901816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.926 [2024-11-19 17:44:59.901847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.926 qpair failed and we were unable to recover it. 00:26:57.926 [2024-11-19 17:44:59.901969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.926 [2024-11-19 17:44:59.902014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.926 qpair failed and we were unable to recover it. 
00:26:57.926 [2024-11-19 17:44:59.902218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.902232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.902373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.902387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.902524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.902539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.902713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.902727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.902815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.902830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.902933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.902953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.903063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.903078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.903287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.903303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.903509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.903524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.903598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.903613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.903773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.903787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.903874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.903888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.904056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.904157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.904239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.904338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.904517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.904621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.926 [2024-11-19 17:44:59.904768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.926 qpair failed and we were unable to recover it.
00:26:57.926 [2024-11-19 17:44:59.904899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.904914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.905067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.905164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.905288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.905376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.905612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.905699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.905884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.905983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.906164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.906278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.906436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.906524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.906706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.906810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.906902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.906916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.907052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.907067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.907231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.907250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.907358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.907375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.907516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.907534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.907622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.907640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.907730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.907748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.907847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.907865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.908017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.908036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.908176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.908194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.908285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.908303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.927 [2024-11-19 17:44:59.908471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.927 [2024-11-19 17:44:59.908489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.927 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.908572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.908591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.908778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.908814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.908985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.909019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.909155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.909186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.909377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.909409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.909657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.909688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.909869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.909900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.910088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.910126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.910307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.910325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.910486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.910517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.910697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.910730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.910978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.911013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.911175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.911194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.911350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.911369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.911470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.911488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.911646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.911664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.911818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.911836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.911916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.911933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.912022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.912040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.912201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.912219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.912406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.912437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.912560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.912591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.912823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.912855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.912984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.913018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.913254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.913272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.913425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.928 [2024-11-19 17:44:59.913443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.928 qpair failed and we were unable to recover it.
00:26:57.928 [2024-11-19 17:44:59.913544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.913562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.913655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.913673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.913878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.913896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.914110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.914129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.914286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.914304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.914377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.914395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.914565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.914583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.914675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.914693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.914833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.914853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.915069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.915088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.915241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.915259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.915356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.915374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.915478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.915496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.915631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.915649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.915803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.915821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.915914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.915932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.916054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.916079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.916232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.916253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.916465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.916485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.916651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.916670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.916840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.916871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.917066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.917100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.917371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.917402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.917650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.917682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.917806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.917837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.917975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.918002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [2024-11-19 17:44:59.918172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-11-19 17:44:59.918198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.918375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.918402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.918574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.918605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.918872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.918903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.919097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.919130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.919302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.919344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.919520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.919547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.919725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.919751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.919938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.919979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.920152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.920189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.920447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.920477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.920713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.920745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.920955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.920987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.921236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.921268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.921526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.921552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.921709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.921736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.921971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.930 [2024-11-19 17:44:59.922003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.930 qpair failed and we were unable to recover it.
00:26:57.930 [2024-11-19 17:44:59.922183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.922216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.922413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.922455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.922702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.922727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.922917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.922943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.923227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.923253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 
00:26:57.930 [2024-11-19 17:44:59.923444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.923469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.923672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.923698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.923938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.923980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.924243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.924275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-11-19 17:44:59.924509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-11-19 17:44:59.924540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 
00:26:57.930 [2024-11-19 17:44:59.924745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.924776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.924962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.924990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.925183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.925214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.925396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.925426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.925593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.925626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-11-19 17:44:59.925893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.925923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.926120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.926152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.926342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.926369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.926560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.926585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.926695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.926721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-11-19 17:44:59.926972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.927006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.927183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.927214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.927393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.927423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.927591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.927621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.927796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.927827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-11-19 17:44:59.928016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.928049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.928236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.928268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.928454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.928485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.928657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.928688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.928893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.928924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-11-19 17:44:59.929123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.929156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.929283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.929314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.929452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.929490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.929781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-11-19 17:44:59.929812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-11-19 17:44:59.929928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.929967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-11-19 17:44:59.930238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.930269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.930518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.930551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.930685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.930716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.930897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.930929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.931206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.931238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-11-19 17:44:59.931367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.931399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.931577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.931608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.931709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.931740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.931865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.931895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.932013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.932045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-11-19 17:44:59.932283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.932315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.932493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.932525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.932700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.932732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.932847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.932878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.933068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.933102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-11-19 17:44:59.933285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.933316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.933522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.933554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.933671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.933703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.933966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.933999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.934186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.934217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-11-19 17:44:59.934331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.934363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.934551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.934582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-11-19 17:44:59.934867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-11-19 17:44:59.934899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.935035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.935067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.935309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.935341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 
00:26:57.933 [2024-11-19 17:44:59.935563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.935595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.935796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.935827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.935943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.935986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.936117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.936149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.936354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.936385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 
00:26:57.933 [2024-11-19 17:44:59.936670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.936702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.936821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.936851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.937089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.937122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.937294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.937326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.937446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.937477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 
00:26:57.933 [2024-11-19 17:44:59.937655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.937686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.937807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.937839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.937961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.938000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.938241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.938273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.938394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.938425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 
00:26:57.933 [2024-11-19 17:44:59.938544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.938575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.938754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.938785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.938971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.939005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.939190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.939221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-11-19 17:44:59.939394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-11-19 17:44:59.939425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 
00:26:57.938 [2024-11-19 17:44:59.962195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.962228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.962496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.962528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.962661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.962698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.962867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.962898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.963026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.963059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 
00:26:57.938 [2024-11-19 17:44:59.963320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.963350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.963584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.963615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.963728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.963760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.963960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.963992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.964248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.964280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 
00:26:57.938 [2024-11-19 17:44:59.964469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.964500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.964629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.964661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.964858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.964889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.965087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.965120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.965308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.965340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 
00:26:57.938 [2024-11-19 17:44:59.965601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.965633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.965819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.965851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.966093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.966127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.938 qpair failed and we were unable to recover it. 00:26:57.938 [2024-11-19 17:44:59.966317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.938 [2024-11-19 17:44:59.966348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.966449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.966481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 
00:26:57.939 [2024-11-19 17:44:59.966688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.966719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.966844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.966877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.966985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.967016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.967190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.967222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.967459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.967490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 
00:26:57.939 [2024-11-19 17:44:59.967701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.967733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.967897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.967927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.968204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.968236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.968475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.968507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.968694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.968726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 
00:26:57.939 [2024-11-19 17:44:59.968933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.968985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.969225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.969255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.969443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.969475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.969742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.969775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.969966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.969999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 
00:26:57.939 [2024-11-19 17:44:59.970170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.970201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.970335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.970365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.970537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.970569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.970749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.970779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.970893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.970924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 
00:26:57.939 [2024-11-19 17:44:59.971169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.971200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.971407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.971438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.971604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.971641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.971829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.939 [2024-11-19 17:44:59.971859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.939 qpair failed and we were unable to recover it. 00:26:57.939 [2024-11-19 17:44:59.972131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.972163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 
00:26:57.940 [2024-11-19 17:44:59.972408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.972439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.972573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.972603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.972790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.972821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.972938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.972978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.973170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.973201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 
00:26:57.940 [2024-11-19 17:44:59.973334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.973365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.973570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.973601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.973780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.973812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.974026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.974058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.974192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.974221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 
00:26:57.940 [2024-11-19 17:44:59.974406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.974437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.974555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.974586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.974768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.974799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.974988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.975020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.975203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.975236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 
00:26:57.940 [2024-11-19 17:44:59.975364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.975395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.975638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.975670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.975875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.975906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.976182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.976216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.976419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.976451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 
00:26:57.940 [2024-11-19 17:44:59.976633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.976664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.976850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.976882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.977022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.977056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.977161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.977191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 00:26:57.940 [2024-11-19 17:44:59.977319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.977350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.940 qpair failed and we were unable to recover it. 
00:26:57.940 [2024-11-19 17:44:59.977627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.940 [2024-11-19 17:44:59.977658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.977927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.977969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.978091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.978121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.978378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.978409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.978537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.978568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 
00:26:57.941 [2024-11-19 17:44:59.978757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.978787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.978917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.978956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.979164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.979195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.979321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.979353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.979536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.979567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 
00:26:57.941 [2024-11-19 17:44:59.979682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.979713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.979902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.979932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.980150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.980188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.980368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.980399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.980525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.980556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 
00:26:57.941 [2024-11-19 17:44:59.980728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.980759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.980938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.980981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.981223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.981254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.981445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.981477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.981656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.981686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 
00:26:57.941 [2024-11-19 17:44:59.981802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.981833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.982073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.982105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.982319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.982350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.982466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.982496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.982683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.982713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 
00:26:57.941 [2024-11-19 17:44:59.982827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.982857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.941 qpair failed and we were unable to recover it. 00:26:57.941 [2024-11-19 17:44:59.983105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.941 [2024-11-19 17:44:59.983138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.983327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.983358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.983530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.983561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.983686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.983718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 
00:26:57.942 [2024-11-19 17:44:59.983919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.983957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.984217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.984248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.984422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.984454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.984588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.984620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.984807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.984839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 
00:26:57.942 [2024-11-19 17:44:59.984966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.984999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.985262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.985294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.985485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.985517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.985642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.985674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.985866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.985898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 
00:26:57.942 [2024-11-19 17:44:59.986077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.986111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.986294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.986324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.986447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.986479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.986723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.986754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.986923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.986965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 
00:26:57.942 [2024-11-19 17:44:59.987138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.987168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.987353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.987385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.987563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.987593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.987774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.987807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.987929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.987970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 
00:26:57.942 [2024-11-19 17:44:59.988080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.988111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.988294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.988326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.988572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.988610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.988714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.988744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.988932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.988977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 
00:26:57.942 [2024-11-19 17:44:59.989163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.989195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.989458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.989490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.989673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.989704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.989843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.989875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.990177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.990209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 
00:26:57.942 [2024-11-19 17:44:59.990447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.942 [2024-11-19 17:44:59.990478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.942 qpair failed and we were unable to recover it. 00:26:57.942 [2024-11-19 17:44:59.990664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.990695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.990809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.990840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.990971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.991003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.991138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.991169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 
00:26:57.943 [2024-11-19 17:44:59.991404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.991435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.991637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.991669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.991779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.991810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.991924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.991963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.992206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.992236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 
00:26:57.943 [2024-11-19 17:44:59.992419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.992450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.992622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.992653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.992780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.992812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.992980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.993012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.993183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.993214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 
00:26:57.943 [2024-11-19 17:44:59.993397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.993428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.993606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.993638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.993901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.993932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.994133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.994165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.994428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.994501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 
00:26:57.943 [2024-11-19 17:44:59.994622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.994645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.994796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.994817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.994899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.994918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.995035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.995057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.995144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.995162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 
00:26:57.943 [2024-11-19 17:44:59.995330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.995351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.995518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.995538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.995752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.995791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.995905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.995936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.996118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.996150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 
00:26:57.943 [2024-11-19 17:44:59.996332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.996364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.996570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.996601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.996706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.996738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.996867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.996899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.997040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.997073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 
00:26:57.943 [2024-11-19 17:44:59.997308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.997340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.997517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.997537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.997631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.943 [2024-11-19 17:44:59.997652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.943 qpair failed and we were unable to recover it. 00:26:57.943 [2024-11-19 17:44:59.997812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.944 [2024-11-19 17:44:59.997834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.944 qpair failed and we were unable to recover it. 00:26:57.944 [2024-11-19 17:44:59.998020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.944 [2024-11-19 17:44:59.998040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.944 qpair failed and we were unable to recover it. 
00:26:57.944 [2024-11-19 17:44:59.998200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.944 [2024-11-19 17:44:59.998221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.944 qpair failed and we were unable to recover it. 00:26:57.944 [2024-11-19 17:44:59.998432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.944 [2024-11-19 17:44:59.998452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.944 qpair failed and we were unable to recover it. 00:26:57.944 [2024-11-19 17:44:59.998615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.944 [2024-11-19 17:44:59.998635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.944 qpair failed and we were unable to recover it. 00:26:57.944 [2024-11-19 17:44:59.998735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.944 [2024-11-19 17:44:59.998755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.944 qpair failed and we were unable to recover it. 00:26:57.944 [2024-11-19 17:44:59.998843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.944 [2024-11-19 17:44:59.998865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.944 qpair failed and we were unable to recover it. 
00:26:57.947 [2024-11-19 17:45:00.013701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.013744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.013912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.014002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.014157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.014193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.014472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.014506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.014615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.014642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.014726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.014746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.014888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.014978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.015121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.015158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.015282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.015315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.947 [2024-11-19 17:45:00.015428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.947 [2024-11-19 17:45:00.015451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:57.947 qpair failed and we were unable to recover it.
00:26:57.949 [2024-11-19 17:45:00.028928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.949 [2024-11-19 17:45:00.028953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.949 qpair failed and we were unable to recover it. 00:26:57.949 [2024-11-19 17:45:00.029049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.949 [2024-11-19 17:45:00.029070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.949 qpair failed and we were unable to recover it. 00:26:57.949 [2024-11-19 17:45:00.029187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.949 [2024-11-19 17:45:00.029207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.949 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.029290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.029311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.029393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.029412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.029583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.029603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.029712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.029733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.029913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.029933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.030022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.030042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.030211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.030231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.030340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.030360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.030512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.030532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.030606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.030626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.030720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.030741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.030886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.030906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.030995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.031113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.031230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.031344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.031447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.031571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.031691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.031815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.031835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.031994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.032015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.032217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.032237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.032315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.032336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.032481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.032553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.032724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.032760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.032988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.033096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.033205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.033388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.033490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.033590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.033768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.033880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.033899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.033982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.034003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.034087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.034107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.034271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.034291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.950 [2024-11-19 17:45:00.034462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.034483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 
00:26:57.950 [2024-11-19 17:45:00.034726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.950 [2024-11-19 17:45:00.034751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.950 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.034840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.034860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.034935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.034975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.035124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.035145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.035302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.035323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 
00:26:57.951 [2024-11-19 17:45:00.035472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.035492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.035651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.035672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.035757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.035776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.035875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.035895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.036047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.036069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 
00:26:57.951 [2024-11-19 17:45:00.036157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.036178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.036330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.036350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.036450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.036470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.036626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.036646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.036733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.036753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 
00:26:57.951 [2024-11-19 17:45:00.036847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.036868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.037027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.037210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.037306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.037405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 
00:26:57.951 [2024-11-19 17:45:00.037503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.037600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.037766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.037885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.037905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.038063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 
00:26:57.951 [2024-11-19 17:45:00.038236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.038339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.038449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.038559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.038665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 
00:26:57.951 [2024-11-19 17:45:00.038775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.038894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.038941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.039062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.039088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.039219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.039245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.039344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.039368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 
00:26:57.951 [2024-11-19 17:45:00.039486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.039512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.039680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.039705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.039858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.039912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.040181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.951 [2024-11-19 17:45:00.040252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.951 qpair failed and we were unable to recover it. 00:26:57.951 [2024-11-19 17:45:00.040462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.040542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 
00:26:57.952 [2024-11-19 17:45:00.040770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.040825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.041120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.041163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.041427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.041469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.041644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.041684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.041829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.041884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 
00:26:57.952 [2024-11-19 17:45:00.042276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.042390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.042571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.042620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.042797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.042892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.043100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.043125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.043233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.043254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 
00:26:57.952 [2024-11-19 17:45:00.043347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.043368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.043457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.043478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.043591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.043612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.043713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.043734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.043889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.043916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 
00:26:57.952 [2024-11-19 17:45:00.044125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.044147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.044262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.044283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.044374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.044394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.044495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.044515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.044606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.044626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 
00:26:57.952 [2024-11-19 17:45:00.044726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.044746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.044842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.044863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.045012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.045033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.045118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.045138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.045291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.045311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 
00:26:57.952 [2024-11-19 17:45:00.045410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.045431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.045593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.045614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.045719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.045740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.045914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.045935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.046132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.046153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 
00:26:57.952 [2024-11-19 17:45:00.046238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.046258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.046355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.046376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.046477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.046498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.046675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.952 [2024-11-19 17:45:00.046695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.952 qpair failed and we were unable to recover it. 00:26:57.952 [2024-11-19 17:45:00.046795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.046816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 
00:26:57.953 [2024-11-19 17:45:00.047046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.047158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.047271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.047394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.047490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 
00:26:57.953 [2024-11-19 17:45:00.047620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.047741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.047876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.047896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.047996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.048017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.048234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.048255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 
00:26:57.953 [2024-11-19 17:45:00.048345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.048366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.048459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.048480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.048644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.048666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.048827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.048848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.048958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.048980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 
00:26:57.953 [2024-11-19 17:45:00.049066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.049087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.049274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.049295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.049403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.049424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.049522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.049543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.049622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.049643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 
00:26:57.953 [2024-11-19 17:45:00.049740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.049760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.049851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.049871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.050036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.050142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.050243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 
00:26:57.953 [2024-11-19 17:45:00.050355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.050480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.050594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.050721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.050891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.050912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 
00:26:57.953 [2024-11-19 17:45:00.051009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.953 [2024-11-19 17:45:00.051030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.953 qpair failed and we were unable to recover it. 00:26:57.953 [2024-11-19 17:45:00.051128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.051148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.051307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.051328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.051490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.051510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.051611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.051631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 
00:26:57.954 [2024-11-19 17:45:00.051739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.051761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.051922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.051942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.052049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.052071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.052161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.052183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.052284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.052304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 
00:26:57.954 [2024-11-19 17:45:00.052401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.052422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.052571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.052592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.052758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.052778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.052870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.052891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.053038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.053060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 
00:26:57.954 [2024-11-19 17:45:00.053149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.053169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.053328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.053349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.053482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.053527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.053787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.053828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.053977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.054011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 
00:26:57.954 [2024-11-19 17:45:00.054128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.054153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.054252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.054273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.054425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.054446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.054553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.054574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.054730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.054751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 
00:26:57.954 [2024-11-19 17:45:00.054830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.054851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.054978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.055000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.055099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.055120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.055224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.055245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 00:26:57.954 [2024-11-19 17:45:00.055335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-19 17:45:00.055357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.954 qpair failed and we were unable to recover it. 
00:26:57.957 [2024-11-19 17:45:00.072039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.957 [2024-11-19 17:45:00.072059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.957 qpair failed and we were unable to recover it. 00:26:57.957 [2024-11-19 17:45:00.072157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.957 [2024-11-19 17:45:00.072178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.957 qpair failed and we were unable to recover it. 00:26:57.957 [2024-11-19 17:45:00.072271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.957 [2024-11-19 17:45:00.072292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.957 qpair failed and we were unable to recover it. 00:26:57.957 [2024-11-19 17:45:00.072459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.957 [2024-11-19 17:45:00.072480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.957 qpair failed and we were unable to recover it. 00:26:57.957 [2024-11-19 17:45:00.072554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.957 [2024-11-19 17:45:00.072574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.957 qpair failed and we were unable to recover it. 
00:26:57.957 [2024-11-19 17:45:00.072661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.957 [2024-11-19 17:45:00.072682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.957 qpair failed and we were unable to recover it. 00:26:57.957 [2024-11-19 17:45:00.072767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.957 [2024-11-19 17:45:00.072788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.957 qpair failed and we were unable to recover it. 00:26:57.957 [2024-11-19 17:45:00.073045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.073162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.073285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 
00:26:57.958 [2024-11-19 17:45:00.073385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.073504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.073606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.073728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.073962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.073983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 
00:26:57.958 [2024-11-19 17:45:00.074073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.074094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.074199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.074219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.074309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.074330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.074509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.074529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.074676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.074697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 
00:26:57.958 [2024-11-19 17:45:00.074779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.074800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.075015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.075037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.075118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.075138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.075311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.075331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.075418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.075438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 
00:26:57.958 [2024-11-19 17:45:00.075521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.075542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.075643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.075667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.075816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.075836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.075988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.076009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.076125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.076146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 
00:26:57.958 [2024-11-19 17:45:00.076319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.076340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.076505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.076526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.076625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.076645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.076735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.076756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.076859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.076879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 
00:26:57.958 [2024-11-19 17:45:00.077024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.077046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.077211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.077231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.077375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.077396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.958 [2024-11-19 17:45:00.077490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.958 [2024-11-19 17:45:00.077510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.958 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.077617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.077637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.077798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.077818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.077974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.077995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.078094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.078114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.078217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.078238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.078478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.078498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.078599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.078620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.078788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.078808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.078904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.078924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.079026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.079047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.079191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.079212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.079292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.079311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.079501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.079521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.079606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.079626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.079740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.079765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.079868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.079888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.079991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.080102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.080206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.080380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.080485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.080601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.080724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.080829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.080849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.081087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.081108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.081281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.081302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.081455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.081475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.081574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.081594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.081755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.081776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.081865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.081885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.082037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.082058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.082148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.082168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.082325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.082346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.082427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.082446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.082529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.082550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.082641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.082662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 
00:26:57.959 [2024-11-19 17:45:00.082804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.959 [2024-11-19 17:45:00.082825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.959 qpair failed and we were unable to recover it. 00:26:57.959 [2024-11-19 17:45:00.082913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.960 [2024-11-19 17:45:00.082933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.960 qpair failed and we were unable to recover it. 00:26:57.960 [2024-11-19 17:45:00.083050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.960 [2024-11-19 17:45:00.083072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.960 qpair failed and we were unable to recover it. 00:26:57.960 [2024-11-19 17:45:00.083162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.960 [2024-11-19 17:45:00.083184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.960 qpair failed and we were unable to recover it. 00:26:57.960 [2024-11-19 17:45:00.083275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.960 [2024-11-19 17:45:00.083295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.960 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.098710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.098730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.098823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.098842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.099061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.099174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.099282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.099380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.099501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.099666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.099845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.099968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.099989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.100096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.100116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.100211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.100230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.100332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.100352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.100571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.100591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.100737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.100757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.100851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.100870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.100978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.100998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.101147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.101167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.101267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.101287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.101361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.101380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.101477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.101498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.101587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.101606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.101708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.101728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.101883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.101902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.101988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.102008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.102089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.102108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.102328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.102388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.102523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.102558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.102741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.102774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.102946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.102995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.103112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.103143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.103250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.103283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.103393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.103415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.103520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.103540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.103623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.103644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 
00:26:57.962 [2024-11-19 17:45:00.103736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.103755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.103979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.104000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.962 qpair failed and we were unable to recover it. 00:26:57.962 [2024-11-19 17:45:00.104152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.962 [2024-11-19 17:45:00.104172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.104320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.104340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.104489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.104508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.104682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.104702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.104940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.104968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.105079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.105099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.105199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.105220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.105314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.105333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.105418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.105439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.105543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.105563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.105727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.105747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.105901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.105921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.106056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.106076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.106157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.106177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.106284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.106303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.106441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.106462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.106587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.106623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.106751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.106783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.106904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.106936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.107139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.107171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.107299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.107331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.107547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.107579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.107841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.107873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.107996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.108030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.108216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.108248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.108429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.108460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.108568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.108600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.108712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.108744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.108849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.108871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.108957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.108977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.109163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.109195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.109317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.109349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.109452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.109482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.109593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.109626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.109811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.109845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.110039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.110073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.110249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.110281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.110384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.110415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:57.963 [2024-11-19 17:45:00.110525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.110557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.110830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.110863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.111042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.111064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.111227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.111259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 00:26:57.963 [2024-11-19 17:45:00.111375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.963 [2024-11-19 17:45:00.111404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:57.963 qpair failed and we were unable to recover it. 
00:26:58.251 [2024-11-19 17:45:00.128200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.251 [2024-11-19 17:45:00.128221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.251 qpair failed and we were unable to recover it. 00:26:58.251 [2024-11-19 17:45:00.128310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.251 [2024-11-19 17:45:00.128328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.128418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.128437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.128655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.128674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.128845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.128865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.252 [2024-11-19 17:45:00.128973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.128993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.129151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.129170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.129383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.129403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.129563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.129583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.129729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.129748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.252 [2024-11-19 17:45:00.129856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.129875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.129967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.129986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.130138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.130157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.130308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.130328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.130421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.130440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.252 [2024-11-19 17:45:00.130535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.130554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.130707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.130726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.130800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.130818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.130911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.130929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.131018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.131037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.252 [2024-11-19 17:45:00.131129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.131148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.131229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.131248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.131415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.131436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.131533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.131553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.131644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.131664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.252 [2024-11-19 17:45:00.131821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.131844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.132006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.132173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.132300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.132420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.252 [2024-11-19 17:45:00.132524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.132645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.132821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.132962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.132982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.133089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.133109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.252 [2024-11-19 17:45:00.133207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.133226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.133390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.133411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.133503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.133522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.133605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.133625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 00:26:58.252 [2024-11-19 17:45:00.133791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.252 [2024-11-19 17:45:00.133811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.252 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.133890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.133911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.134084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.134105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.134262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.134283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.134376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.134394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.134484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.134504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.134611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.134630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.134787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.134807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.134891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.134910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.135085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.135106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.135272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.135293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.135448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.135468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.135628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.135648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.135811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.135832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.135994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.136017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.136181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.136202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.136414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.136433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.136579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.136599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.136747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.136768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.136868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.136887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.136993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.137014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.137104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.137124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.137308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.137328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.137484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.137504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.137755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.137776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.137868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.137887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.138033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.138053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.138153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.138173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.138327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.138346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.138501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.138521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.138732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.138753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.138995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.139016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.139119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.139138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.139228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.139247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.139409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.139429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 00:26:58.253 [2024-11-19 17:45:00.139651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.139670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it. 
00:26:58.253 [2024-11-19 17:45:00.139823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.253 [2024-11-19 17:45:00.139842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.253 qpair failed and we were unable to recover it.
[log condensed: the two-line error pair above — posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it." — repeats continuously from 17:45:00.139 through 17:45:00.156, with tqpair mostly 0x18a9ba0 and intermittently 0x7f4100000b90 and 0x7f4104000b90.]
00:26:58.257 [2024-11-19 17:45:00.155996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.156031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it.
00:26:58.257 [2024-11-19 17:45:00.156213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.156245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.156349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.156382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.156498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.156530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.156651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.156683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.156792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.156815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 
00:26:58.257 [2024-11-19 17:45:00.156911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.156930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.157135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.157156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.157320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.157341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.157456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.157475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.157633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.157654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 
00:26:58.257 [2024-11-19 17:45:00.157806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.157826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.157917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.157937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.158098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.158213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.158312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 
00:26:58.257 [2024-11-19 17:45:00.158417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.158530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.158639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.158807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.158918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.158937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 
00:26:58.257 [2024-11-19 17:45:00.159033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.159052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.159139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.159159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.159243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.159263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.159420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.159440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.159614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.159634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 
00:26:58.257 [2024-11-19 17:45:00.159724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.159746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.159892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.159911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.160009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.160031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.160122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.160141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.160234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.160254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 
00:26:58.257 [2024-11-19 17:45:00.160346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.160366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.160533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.160553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.160702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.160720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.160810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.160830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 00:26:58.257 [2024-11-19 17:45:00.161042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.257 [2024-11-19 17:45:00.161063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.257 qpair failed and we were unable to recover it. 
00:26:58.257 [2024-11-19 17:45:00.161149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.161169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.161255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.161274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.161367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.161386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.161528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.161547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.161641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.161661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 
00:26:58.258 [2024-11-19 17:45:00.161817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.161838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.161919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.161941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.162049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.162150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.162259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 
00:26:58.258 [2024-11-19 17:45:00.162360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.162473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.162572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.162675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.162787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 
00:26:58.258 [2024-11-19 17:45:00.162885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.162903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 
00:26:58.258 [2024-11-19 17:45:00.163433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.163900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.163919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 
00:26:58.258 [2024-11-19 17:45:00.164084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.164104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.164195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.164213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.164311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.164330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.164405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.164424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.164579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.164600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 
00:26:58.258 [2024-11-19 17:45:00.164782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.164802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.164905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.164924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.165031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.165050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.165203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.165223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.165316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.165336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 
00:26:58.258 [2024-11-19 17:45:00.165533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.165553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.165650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.165668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.258 [2024-11-19 17:45:00.165819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.258 [2024-11-19 17:45:00.165838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.258 qpair failed and we were unable to recover it. 00:26:58.259 [2024-11-19 17:45:00.165919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.259 [2024-11-19 17:45:00.165937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.259 qpair failed and we were unable to recover it. 00:26:58.259 [2024-11-19 17:45:00.166119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.259 [2024-11-19 17:45:00.166140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.259 qpair failed and we were unable to recover it. 
00:26:58.259 [2024-11-19 17:45:00.166219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.259 [2024-11-19 17:45:00.166238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.259 qpair failed and we were unable to recover it. 00:26:58.259 [2024-11-19 17:45:00.166323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.259 [2024-11-19 17:45:00.166341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.259 qpair failed and we were unable to recover it. 00:26:58.259 [2024-11-19 17:45:00.166451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.259 [2024-11-19 17:45:00.166470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.259 qpair failed and we were unable to recover it. 00:26:58.259 [2024-11-19 17:45:00.166642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.259 [2024-11-19 17:45:00.166662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.259 qpair failed and we were unable to recover it. 00:26:58.259 [2024-11-19 17:45:00.166812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.259 [2024-11-19 17:45:00.166835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.259 qpair failed and we were unable to recover it. 
00:26:58.262 [2024-11-19 17:45:00.182330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.182350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.182500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.182522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.182677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.182697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.182801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.182821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.182984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 
00:26:58.262 [2024-11-19 17:45:00.183194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.183316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.183427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.183542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.183649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 
00:26:58.262 [2024-11-19 17:45:00.183832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.183943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.183986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.184129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.184149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.184256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.184276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.184376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.184396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 
00:26:58.262 [2024-11-19 17:45:00.184499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.184519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.184679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.184699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.184806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.184827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.184915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.184936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.185051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.185072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 
00:26:58.262 [2024-11-19 17:45:00.185223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.185243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.185320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.185341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.185428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.185448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.185534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.185555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.185746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.185766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 
00:26:58.262 [2024-11-19 17:45:00.185935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.185962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.186061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.186081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.186175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.186197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.186282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.186302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.186384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.186405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 
00:26:58.262 [2024-11-19 17:45:00.186551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.186571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.262 [2024-11-19 17:45:00.186677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.262 [2024-11-19 17:45:00.186701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.262 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.186814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.186834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.187010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.187117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 
00:26:58.263 [2024-11-19 17:45:00.187233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.187339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.187438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.187694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.187810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 
00:26:58.263 [2024-11-19 17:45:00.187908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.187928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.188030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.188051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.188141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.188161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.188257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.188277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.188557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.188576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 
00:26:58.263 [2024-11-19 17:45:00.188683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.188703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.188795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.188816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.188923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.188943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.189040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.189061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.189156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.189175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 
00:26:58.263 [2024-11-19 17:45:00.189345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.189365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.189516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.189536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.189636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.189656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.189823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.189844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.190032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.190052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 
00:26:58.263 [2024-11-19 17:45:00.190281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.190301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.190448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.190468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.190566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.190587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.190752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.190772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.190878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.190898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 
00:26:58.263 [2024-11-19 17:45:00.191068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.191088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.191192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.191212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.191364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.191384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.191596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.191616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.191710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.191729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 
00:26:58.263 [2024-11-19 17:45:00.191881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.263 [2024-11-19 17:45:00.191901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.263 qpair failed and we were unable to recover it. 00:26:58.263 [2024-11-19 17:45:00.191992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.192012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.192097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.192115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.192293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.192313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.192467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.192487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 
00:26:58.264 [2024-11-19 17:45:00.192638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.192658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.192745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.192764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.192937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.192963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.193053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.193072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.193167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.193186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 
00:26:58.264 [2024-11-19 17:45:00.193284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.193303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.193402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.193421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.193507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.193526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.193605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.193624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 00:26:58.264 [2024-11-19 17:45:00.193727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.264 [2024-11-19 17:45:00.193750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.264 qpair failed and we were unable to recover it. 
00:26:58.267 [2024-11-19 17:45:00.208166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.208185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.208275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.208296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.208451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.208471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.208574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.208595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.208689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.208708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 
00:26:58.267 [2024-11-19 17:45:00.208786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.208807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.208885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.208904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 
00:26:58.267 [2024-11-19 17:45:00.209315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 
00:26:58.267 [2024-11-19 17:45:00.209844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.209945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.209994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.210093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.210197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.210297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 
00:26:58.267 [2024-11-19 17:45:00.210464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.210568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.210674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.210781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.210885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.210905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 
00:26:58.267 [2024-11-19 17:45:00.211005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.211026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.211104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.211124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.211207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.211227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.211301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.211321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.267 qpair failed and we were unable to recover it. 00:26:58.267 [2024-11-19 17:45:00.211472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.267 [2024-11-19 17:45:00.211493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 
00:26:58.268 [2024-11-19 17:45:00.211638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.211657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.211753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.211772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.211853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.211873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.211970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.211992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.212073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 
00:26:58.268 [2024-11-19 17:45:00.212179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.212280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.212375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.212482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.212598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 
00:26:58.268 [2024-11-19 17:45:00.212690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.212858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.212968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.212988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.213138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.213235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 
00:26:58.268 [2024-11-19 17:45:00.213347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.213463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.213577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.213686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.213793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 
00:26:58.268 [2024-11-19 17:45:00.213891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.213911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.214038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.214061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.214218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.214238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.214331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.214350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.214455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.214475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 
00:26:58.268 [2024-11-19 17:45:00.214650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.214670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.214783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.214802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.214901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.214921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.215014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.215034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.215188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.215208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 
00:26:58.268 [2024-11-19 17:45:00.215367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.215387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.215460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.215480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.215692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.268 [2024-11-19 17:45:00.215712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.268 qpair failed and we were unable to recover it. 00:26:58.268 [2024-11-19 17:45:00.215784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.215803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.215885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.215907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 
00:26:58.269 [2024-11-19 17:45:00.216080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.216205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.216305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.216418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.216544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 
00:26:58.269 [2024-11-19 17:45:00.216720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.216825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.216935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.216961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.217062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.217082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.217233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.217254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 
00:26:58.269 [2024-11-19 17:45:00.217346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.217370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.217456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.217474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.217632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.217653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.217737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.217755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 00:26:58.269 [2024-11-19 17:45:00.217836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.269 [2024-11-19 17:45:00.217855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.269 qpair failed and we were unable to recover it. 
00:26:58.271 [2024-11-19 17:45:00.229597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.271 [2024-11-19 17:45:00.229616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.271 qpair failed and we were unable to recover it.
00:26:58.271 [2024-11-19 17:45:00.229702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.271 [2024-11-19 17:45:00.229721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.271 qpair failed and we were unable to recover it.
00:26:58.271 [2024-11-19 17:45:00.229820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.271 [2024-11-19 17:45:00.229839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.271 qpair failed and we were unable to recover it.
00:26:58.271 [2024-11-19 17:45:00.230002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.271 [2024-11-19 17:45:00.230025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.271 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.230238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.230314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.230446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.230493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.230675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.230708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.230831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.230863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.230982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.231017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.231198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.231231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.231415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.231448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.231554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.231589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.231704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.231727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.231882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.231902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.232009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.232029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.232121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.232141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.232292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.232311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.232400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.232424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.232591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.232610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.232691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.272 [2024-11-19 17:45:00.232712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.272 qpair failed and we were unable to recover it.
00:26:58.272 [2024-11-19 17:45:00.232812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.232832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.232997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.233107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.233210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.233309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 
00:26:58.272 [2024-11-19 17:45:00.233488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.233594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.233760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.233870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.233889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.233992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.234014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 
00:26:58.272 [2024-11-19 17:45:00.234156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.234176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.234273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.234293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.234392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.234411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.234563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.234583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 00:26:58.272 [2024-11-19 17:45:00.234673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.272 [2024-11-19 17:45:00.234693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.272 qpair failed and we were unable to recover it. 
00:26:58.272 [2024-11-19 17:45:00.234859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.234880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 
00:26:58.273 [2024-11-19 17:45:00.235476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.235974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.235994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 
00:26:58.273 [2024-11-19 17:45:00.236100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.236202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.236388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.236487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.236594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 
00:26:58.273 [2024-11-19 17:45:00.236693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.236799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.236922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.236942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.237061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.237080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.237223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.237244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 
00:26:58.273 [2024-11-19 17:45:00.237338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.237358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.237441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.237460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.237630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.237651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.237749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.237773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.237857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.237877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 
00:26:58.273 [2024-11-19 17:45:00.238043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.238064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.238165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.238185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.273 [2024-11-19 17:45:00.238282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.273 [2024-11-19 17:45:00.238302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.273 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.238384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.238405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.238484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.238503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 
00:26:58.274 [2024-11-19 17:45:00.238657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.238677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.238831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.238850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.238936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.238963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.239059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.239177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 
00:26:58.274 [2024-11-19 17:45:00.239276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.239446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.239551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.239661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.239835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 
00:26:58.274 [2024-11-19 17:45:00.239943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.239970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.240112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.240217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.240417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.240513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 
00:26:58.274 [2024-11-19 17:45:00.240615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.240716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.240814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.240913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.240933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.241039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.241061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 
00:26:58.274 [2024-11-19 17:45:00.241210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.241235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.241307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.241326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.241489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.241510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.241589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.241609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.241777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.241797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 
00:26:58.274 [2024-11-19 17:45:00.241902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.241922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.242053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.242075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.242151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.242171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.242248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.242268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 00:26:58.274 [2024-11-19 17:45:00.242370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.274 [2024-11-19 17:45:00.242390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.274 qpair failed and we were unable to recover it. 
00:26:58.274 [2024-11-19 17:45:00.242478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.275 [2024-11-19 17:45:00.242498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.275 qpair failed and we were unable to recover it. 00:26:58.275 [2024-11-19 17:45:00.242580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.275 [2024-11-19 17:45:00.242599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.275 qpair failed and we were unable to recover it. 00:26:58.275 [2024-11-19 17:45:00.242786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.275 [2024-11-19 17:45:00.242806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.275 qpair failed and we were unable to recover it. 00:26:58.275 [2024-11-19 17:45:00.242885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.275 [2024-11-19 17:45:00.242905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.275 qpair failed and we were unable to recover it. 00:26:58.275 [2024-11-19 17:45:00.242997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.275 [2024-11-19 17:45:00.243017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.275 qpair failed and we were unable to recover it. 
00:26:58.278 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." entries for tqpair=0x18a9ba0, addr=10.0.0.2, port=4420 repeated through 2024-11-19 17:45:00.256296 ...]
00:26:58.278 [2024-11-19 17:45:00.256389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.256407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.256511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.256532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.256683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.256703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.256795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.256816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.256964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.256986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 
00:26:58.278 [2024-11-19 17:45:00.257087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.257107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.257202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.257221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.257307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.257325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.257417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.257437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 00:26:58.278 [2024-11-19 17:45:00.257585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.278 [2024-11-19 17:45:00.257605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.278 qpair failed and we were unable to recover it. 
00:26:58.279 [2024-11-19 17:45:00.257685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.257704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.257858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.257879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.257983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.258095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.258210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 
00:26:58.279 [2024-11-19 17:45:00.258333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.258451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.258559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.258664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.258774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 
00:26:58.279 [2024-11-19 17:45:00.258888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.258911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.259033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.259206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.259309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.259414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 
00:26:58.279 [2024-11-19 17:45:00.259518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.259613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.259779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.259883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.259903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.260049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 
00:26:58.279 [2024-11-19 17:45:00.260221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.260365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.260468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.260569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.260676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 
00:26:58.279 [2024-11-19 17:45:00.260802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.260917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.260937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.261034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.261054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.261160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.261197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.261385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.261418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 
00:26:58.279 [2024-11-19 17:45:00.261528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.261561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.261702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.261734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.279 qpair failed and we were unable to recover it. 00:26:58.279 [2024-11-19 17:45:00.261829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.279 [2024-11-19 17:45:00.261851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.261938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.261972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.262108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 
00:26:58.280 [2024-11-19 17:45:00.262218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.262334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.262439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.262545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.262712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 
00:26:58.280 [2024-11-19 17:45:00.262830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.262957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.262979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.263181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.263202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.263284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.263304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.263393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.263413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 
00:26:58.280 [2024-11-19 17:45:00.263558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.263579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.263678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.263698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.263779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.263799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.263887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.263907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.263984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 
00:26:58.280 [2024-11-19 17:45:00.264097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.264232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.264350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.264446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.264558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 
00:26:58.280 [2024-11-19 17:45:00.264760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.264867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.264977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.264998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.265086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.265106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.265247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.265267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 
00:26:58.280 [2024-11-19 17:45:00.265343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.265364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.265510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.265531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.265609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.265629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.265705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.265725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 00:26:58.280 [2024-11-19 17:45:00.265824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.280 [2024-11-19 17:45:00.265844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.280 qpair failed and we were unable to recover it. 
00:26:58.280 [2024-11-19 17:45:00.265954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.280 [2024-11-19 17:45:00.265975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.280 qpair failed and we were unable to recover it.
[the three log lines above repeat continuously with advancing timestamps through 00:26:58.284 / 2024-11-19 17:45:00.280716; every iteration reports the same connect() failure with errno = 111 for tqpair=0x18a9ba0, addr=10.0.0.2, port=4420]
00:26:58.284 [2024-11-19 17:45:00.280799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.280818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 00:26:58.284 [2024-11-19 17:45:00.280994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 00:26:58.284 [2024-11-19 17:45:00.281122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 00:26:58.284 [2024-11-19 17:45:00.281316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 00:26:58.284 [2024-11-19 17:45:00.281430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 
00:26:58.284 [2024-11-19 17:45:00.281531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 00:26:58.284 [2024-11-19 17:45:00.281709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 00:26:58.284 [2024-11-19 17:45:00.281806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.284 qpair failed and we were unable to recover it. 00:26:58.284 [2024-11-19 17:45:00.281908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.284 [2024-11-19 17:45:00.281928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.282016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.282036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 
00:26:58.285 [2024-11-19 17:45:00.282139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.282159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.282250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.282269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.282431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.282451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.282639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.282659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.282740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.282760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 
00:26:58.285 [2024-11-19 17:45:00.282848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.282867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.283026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.283132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.283246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.283415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 
00:26:58.285 [2024-11-19 17:45:00.283524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.283711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.283814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.283927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.283952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.284053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.284076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 
00:26:58.285 [2024-11-19 17:45:00.284236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.284256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.284416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.284446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.284646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.284677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.284864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.284895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.285081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.285101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 
00:26:58.285 [2024-11-19 17:45:00.285184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.285204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.285366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.285386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.285470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.285490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.285658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.285679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.285761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.285782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 
00:26:58.285 [2024-11-19 17:45:00.285975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.285996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.286101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.286122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.286205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.286225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.286404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.286424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.286589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.286609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 
00:26:58.285 [2024-11-19 17:45:00.286699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.286719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.286875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.286895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.286975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.286997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.285 qpair failed and we were unable to recover it. 00:26:58.285 [2024-11-19 17:45:00.287217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.285 [2024-11-19 17:45:00.287237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.287331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.287352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 
00:26:58.286 [2024-11-19 17:45:00.287508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.287529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.287691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.287711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.287802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.287822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.287923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.287943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.288049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.288069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 
00:26:58.286 [2024-11-19 17:45:00.288158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.288178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.288333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.288357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.288453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.288474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.288619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.288639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.288853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.288874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 
00:26:58.286 [2024-11-19 17:45:00.289027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.289048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.289135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.289155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.289311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.289331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.289413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.289433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.289549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.289569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 
00:26:58.286 [2024-11-19 17:45:00.289747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.289767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.289958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.289979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.290079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.290099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.290194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.290213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.290355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.290375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 
00:26:58.286 [2024-11-19 17:45:00.290529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.290548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.290655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.290675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.290893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.290913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.291094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.291115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.291199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.291219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 
00:26:58.286 [2024-11-19 17:45:00.291374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.291394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.291482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.291503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.291599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.291619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.291702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.291722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 00:26:58.286 [2024-11-19 17:45:00.291808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.286 [2024-11-19 17:45:00.291828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.286 qpair failed and we were unable to recover it. 
00:26:58.287 [2024-11-19 17:45:00.291929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.291955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.292049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.292068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.292174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.292203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.292290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.292309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.292464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.292485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 
00:26:58.287 [2024-11-19 17:45:00.292577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.292597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.292849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.292920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.293077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.293114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.293387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.293419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 00:26:58.287 [2024-11-19 17:45:00.293526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.287 [2024-11-19 17:45:00.293559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.287 qpair failed and we were unable to recover it. 
00:26:58.287 [2024-11-19 17:45:00.293784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.293805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.293972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.293992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.294114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.294134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.294226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.294246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.294396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.294416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.294504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.294524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.294681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.294701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.294891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.294911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.295129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.295150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.295251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.295271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.295416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.295435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.295600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.295620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.295720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.295739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.295897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.295918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.296119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.296139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.296216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.296235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.296371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.296391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.296493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.296513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.296729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.296748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.296856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.296877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.296983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.297004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.297243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.297264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.287 qpair failed and we were unable to recover it.
00:26:58.287 [2024-11-19 17:45:00.297340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.287 [2024-11-19 17:45:00.297360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.297454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.297474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.297637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.297656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.297751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.297772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.297988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.298009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.298241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.298261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.298353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.298372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.298463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.298483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.298573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.298593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.298758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.298778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.298873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.298893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.298992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.299013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.299175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.299198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.299295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.299315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.299401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.299421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.299518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.299537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.299627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.299647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.299791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.299811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.300935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.300984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.301150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.301170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.301381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.301402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.301549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.301569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.301732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.301752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.301844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.301864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.301959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.301980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.302154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.288 [2024-11-19 17:45:00.302174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.288 qpair failed and we were unable to recover it.
00:26:58.288 [2024-11-19 17:45:00.302280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.302300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.302462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.302481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.302574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.302594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.302677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.302697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.302853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.302873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.302980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.303147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.303263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.303385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.303498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.303690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.303796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.303912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.303932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.304173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.304193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.304277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.304296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.304380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.304400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.304478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.304498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.304740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.304759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.304923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.304943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.305089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.305109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.305193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.305212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.305309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.305328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.305506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.305525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.305614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.305633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.289 qpair failed and we were unable to recover it.
00:26:58.289 [2024-11-19 17:45:00.305794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.289 [2024-11-19 17:45:00.305814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.305902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.305921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.306185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.306303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.306401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.306499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.306620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.306801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.306907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.306991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.307015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.307115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.307135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.307218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.307237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.307411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.307431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.307537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.307557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.307712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.307732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.307816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.290 [2024-11-19 17:45:00.307835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.290 qpair failed and we were unable to recover it.
00:26:58.290 [2024-11-19 17:45:00.307980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.308081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.308198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.308312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.308415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 
00:26:58.290 [2024-11-19 17:45:00.308585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.308695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.308868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.308888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.308994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.309015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.309093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.309112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 
00:26:58.290 [2024-11-19 17:45:00.309196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.309216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.309312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.309331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.309486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.309506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.309595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.290 [2024-11-19 17:45:00.309614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.290 qpair failed and we were unable to recover it. 00:26:58.290 [2024-11-19 17:45:00.309710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.309730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 
00:26:58.291 [2024-11-19 17:45:00.309921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.309940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.310107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.310129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.310219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.310239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.310390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.310410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.310563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.310582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 
00:26:58.291 [2024-11-19 17:45:00.310678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.310699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.310790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.310810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.310980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.311096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.311198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 
00:26:58.291 [2024-11-19 17:45:00.311313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.311428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.311605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.311777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.311888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.311907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 
00:26:58.291 [2024-11-19 17:45:00.311990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.312011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.312167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.312187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.312281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.312301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.312446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.312466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.312659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b7af0 is same with the state(6) to be set 00:26:58.291 [2024-11-19 17:45:00.312933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.313022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 
00:26:58.291 [2024-11-19 17:45:00.313166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.313203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.313347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.313381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.313622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.313655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.313876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.313908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.314052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.314085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 
00:26:58.291 [2024-11-19 17:45:00.314179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.314202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.314447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.314468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.314624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.314644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.314738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.314759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.314840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.314860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 
00:26:58.291 [2024-11-19 17:45:00.314946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.291 [2024-11-19 17:45:00.314973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.291 qpair failed and we were unable to recover it. 00:26:58.291 [2024-11-19 17:45:00.315061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.315081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.315181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.315201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.315282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.315301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.315481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.315502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 
00:26:58.292 [2024-11-19 17:45:00.315651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.315671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.315823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.315843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.315935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.315962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.316053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.316162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 
00:26:58.292 [2024-11-19 17:45:00.316264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.316388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.316491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.316588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.316736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 
00:26:58.292 [2024-11-19 17:45:00.316905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.316924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.317148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.317168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.317276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.317296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.317406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.317425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.317516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.317536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 
00:26:58.292 [2024-11-19 17:45:00.317706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.317726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.317806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.317826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.317909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.317929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.318017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.318038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.318188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.318208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 
00:26:58.292 [2024-11-19 17:45:00.318403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.318423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.318511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.318531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.318629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.318649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.318750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.318769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.318850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.318874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 
00:26:58.292 [2024-11-19 17:45:00.319024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.319045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.319123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.319143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.292 qpair failed and we were unable to recover it. 00:26:58.292 [2024-11-19 17:45:00.319232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.292 [2024-11-19 17:45:00.319251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 00:26:58.293 [2024-11-19 17:45:00.319361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.293 [2024-11-19 17:45:00.319381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 00:26:58.293 [2024-11-19 17:45:00.319468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.293 [2024-11-19 17:45:00.319488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 
00:26:58.293 [2024-11-19 17:45:00.319638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.293 [2024-11-19 17:45:00.319658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 00:26:58.293 [2024-11-19 17:45:00.319771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.293 [2024-11-19 17:45:00.319791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 00:26:58.293 [2024-11-19 17:45:00.319942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.293 [2024-11-19 17:45:00.319995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 00:26:58.293 [2024-11-19 17:45:00.320101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.293 [2024-11-19 17:45:00.320121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 00:26:58.293 [2024-11-19 17:45:00.320338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.293 [2024-11-19 17:45:00.320358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.293 qpair failed and we were unable to recover it. 
00:26:58.293 [2024-11-19 17:45:00.320511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.293 [2024-11-19 17:45:00.320530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.293 qpair failed and we were unable to recover it.
00:26:58.293 [the three-line sequence above repeats 114 more times between 17:45:00.320683 and 17:45:00.336751, every attempt failing with errno = 111 (connection refused) against addr=10.0.0.2, port=4420; tqpair=0x18a9ba0 throughout, except for three attempts against tqpair=0x7f4104000b90 at 17:45:00.328759-17:45:00.329264]
00:26:58.296 [2024-11-19 17:45:00.336938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.336963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.337110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.337130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.337229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.337249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.337356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.337376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.337541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.337580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 
00:26:58.296 [2024-11-19 17:45:00.337719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.337750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.337860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.337892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.338009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.338041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.338226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.338247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.338450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.338481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 
00:26:58.296 [2024-11-19 17:45:00.338587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.338618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.338733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.338764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.338939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.338982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.339177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.339220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.339321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.339341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 
00:26:58.296 [2024-11-19 17:45:00.339448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.339468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.339625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.339645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.339860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.339890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.296 qpair failed and we were unable to recover it. 00:26:58.296 [2024-11-19 17:45:00.340081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.296 [2024-11-19 17:45:00.340116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.340301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.340333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.340441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.340461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.340546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.340566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.340659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.340680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.340768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.340791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.340961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.340982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.341065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.341085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.341183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.341202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.341299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.341319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.341470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.341489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.341653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.341674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.341763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.341937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.341973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.342127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.342149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.342302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.342321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.342419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.342437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.342532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.342551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.342631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.342650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.342735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.342755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.342908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.342926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.343098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.343119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.343279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.343298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.343452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.343472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.343568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.343587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.343691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.343711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.343803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.343821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.343905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.343925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.344015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.344034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.344137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.344157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.344388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.344408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.344558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.344578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.344737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.344760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.344846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.344865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.344962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.344982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.345224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.345244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.345342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.345362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 
00:26:58.297 [2024-11-19 17:45:00.345477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.345497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.345594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.345614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.297 qpair failed and we were unable to recover it. 00:26:58.297 [2024-11-19 17:45:00.345711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.297 [2024-11-19 17:45:00.345732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.345830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.345850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.345932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.345958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 
00:26:58.298 [2024-11-19 17:45:00.346110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.346130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.346231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.346251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.346342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.346362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.346456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.346474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.346641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.346661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 
00:26:58.298 [2024-11-19 17:45:00.346746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.346765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.346848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.346866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.347014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.347035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.347117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.347135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.347231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.347251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 
00:26:58.298 [2024-11-19 17:45:00.347470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.347490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.347718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.347739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.347824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.347842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.347936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.347962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 00:26:58.298 [2024-11-19 17:45:00.348043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.298 [2024-11-19 17:45:00.348061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.298 qpair failed and we were unable to recover it. 
00:26:58.298 [2024-11-19 17:45:00.348151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.298 [2024-11-19 17:45:00.348170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.298 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet for tqpair=0x18a9ba0 (addr=10.0.0.2, port=4420) repeats from 17:45:00.348256 through 17:45:00.365065 ...]
00:26:58.301 [2024-11-19 17:45:00.365223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.301 [2024-11-19 17:45:00.365294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.301 qpair failed and we were unable to recover it.
[... triplets alternate between tqpair=0x7f410c000b90 and tqpair=0x18a9ba0 (same addr/port) from 17:45:00.365532 through 17:45:00.368487 ...]
00:26:58.302 [2024-11-19 17:45:00.368487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.302 [2024-11-19 17:45:00.368509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.302 qpair failed and we were unable to recover it.
00:26:58.302 [2024-11-19 17:45:00.368602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.368622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.368773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.368792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.368878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.368897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.369070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.369092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.369199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.369218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 
00:26:58.302 [2024-11-19 17:45:00.369308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.369327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.369429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.369448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.369595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.369614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.369763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.369783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.369930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.369967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 
00:26:58.302 [2024-11-19 17:45:00.370065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.370172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.370275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.370394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.370498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 
00:26:58.302 [2024-11-19 17:45:00.370609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.370736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.370854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.370962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.370981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.371132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.371153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 
00:26:58.302 [2024-11-19 17:45:00.371237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.371255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.371405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.371425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.371512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.302 [2024-11-19 17:45:00.371530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.302 qpair failed and we were unable to recover it. 00:26:58.302 [2024-11-19 17:45:00.371638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.371676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.371863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.371895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 
00:26:58.303 [2024-11-19 17:45:00.372122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.372154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.372330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.372361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.372467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.372499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.372668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.372699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.372811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.372842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 
00:26:58.303 [2024-11-19 17:45:00.372956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.372990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.373102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.373139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.373248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.373270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.373367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.373388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.373475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.373496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 
00:26:58.303 [2024-11-19 17:45:00.373649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.373669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.373775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.373796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.373953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.373975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.374143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.374163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.374239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.374258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 
00:26:58.303 [2024-11-19 17:45:00.374415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.374436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.374519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.374539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.374619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.374639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.374855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.374875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.375028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 
00:26:58.303 [2024-11-19 17:45:00.375153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.375268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.375396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.375528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.375653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 
00:26:58.303 [2024-11-19 17:45:00.375786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.375945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.375992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.303 qpair failed and we were unable to recover it. 00:26:58.303 [2024-11-19 17:45:00.376112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.303 [2024-11-19 17:45:00.376143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.376262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.376295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.376421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.376452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 
00:26:58.304 [2024-11-19 17:45:00.376632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.376662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.376758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.376780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.376929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.376956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.377035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.377054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.377130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.377150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 
00:26:58.304 [2024-11-19 17:45:00.377243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.377262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.377454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.377475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.377571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.377590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.377691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.377712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.377890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.377909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 
00:26:58.304 [2024-11-19 17:45:00.378003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.378134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.378269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.378450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.378617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 
00:26:58.304 [2024-11-19 17:45:00.378726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.378838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.378942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.378979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.379065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.379085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.379226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.379246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 
00:26:58.304 [2024-11-19 17:45:00.379409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.379429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.379525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.379544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.379650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.379669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.379829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.379849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.380008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.380030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 
00:26:58.304 [2024-11-19 17:45:00.380221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.380240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.380325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.380344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.380502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.380521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.380627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.380647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 00:26:58.304 [2024-11-19 17:45:00.380864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.380883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.304 qpair failed and we were unable to recover it. 
00:26:58.304 [2024-11-19 17:45:00.380975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.304 [2024-11-19 17:45:00.380995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.381183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.381202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.381344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.381364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.381471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.381490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.381655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.381675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 
00:26:58.305 [2024-11-19 17:45:00.381770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.381790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.381973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.381993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.382141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.382160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.382255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.382275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.382367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.382386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 
00:26:58.305 [2024-11-19 17:45:00.382461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.382481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.382697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.382718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.382935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.382960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.383113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.383132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.383296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.383316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 
00:26:58.305 [2024-11-19 17:45:00.383415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.383434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.383596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.383616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.383830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.383849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.383942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.383975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.384129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.384152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 
00:26:58.305 [2024-11-19 17:45:00.384238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.384258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.384353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.384373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.384520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.384540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.384632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.384651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.305 qpair failed and we were unable to recover it. 00:26:58.305 [2024-11-19 17:45:00.384798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.305 [2024-11-19 17:45:00.384817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 
00:26:58.306 [2024-11-19 17:45:00.384915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.384935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.385095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.385116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.385214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.385233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.385382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.385402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.385514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.385534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 
00:26:58.306 [2024-11-19 17:45:00.385625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.385644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.385798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.385818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.385896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.385916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.386081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.386101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.386256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.386275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 
00:26:58.306 [2024-11-19 17:45:00.386526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.386546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.386638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.386658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.386756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.386776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.386862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.386882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.386962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.386983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 
00:26:58.306 [2024-11-19 17:45:00.387137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.387157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.387314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.387333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.387571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.387590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.387675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.387694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.387898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.387986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 
00:26:58.306 [2024-11-19 17:45:00.388202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.388273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.388459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.388484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.388668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.388688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.388847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.388867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.388964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.388986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 
00:26:58.306 [2024-11-19 17:45:00.389153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.389173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.389459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.389478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.389589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.389609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.389685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.389705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.389854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.389874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 
00:26:58.306 [2024-11-19 17:45:00.390122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.390146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.390240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.390260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.390357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.390378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.390534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.306 [2024-11-19 17:45:00.390555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.306 qpair failed and we were unable to recover it. 00:26:58.306 [2024-11-19 17:45:00.390654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.390674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 
00:26:58.307 [2024-11-19 17:45:00.390756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.390776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.390871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.390891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.390984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.391118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.391237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 
00:26:58.307 [2024-11-19 17:45:00.391353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.391473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.391583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.391686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.391855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.391876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 
00:26:58.307 [2024-11-19 17:45:00.392042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.392063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.392160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.392180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.392341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.392361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.392465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.392490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.392586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.392605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 
00:26:58.307 [2024-11-19 17:45:00.392703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.392723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.392877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.392897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.393004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.393113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.393293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 
00:26:58.307 [2024-11-19 17:45:00.393461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.393577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.393699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.393802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.393902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.393922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 
00:26:58.307 [2024-11-19 17:45:00.394025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.394047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.394193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.394213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.394312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.394333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.307 [2024-11-19 17:45:00.394434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.307 [2024-11-19 17:45:00.394454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.307 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.394611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.394631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 
00:26:58.308 [2024-11-19 17:45:00.394712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.394732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.394812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.394833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.394924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.394944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.395114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.395135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.395219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.395239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 
00:26:58.308 [2024-11-19 17:45:00.395392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.395413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.395513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.395534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.395680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.395700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.395781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.395801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.395883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.395903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 
00:26:58.308 [2024-11-19 17:45:00.395996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.396018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.396200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.396220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.396305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.396324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.396475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.396495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.396585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.396604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 
00:26:58.308 [2024-11-19 17:45:00.396704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.396724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.396801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.396822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.397036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.397057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.397227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.397248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.397410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.397432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 
00:26:58.308 [2024-11-19 17:45:00.397578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.397597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.397698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.397718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.397817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.397837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.397938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.397970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.398067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.398092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 
00:26:58.308 [2024-11-19 17:45:00.398275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.398295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.398448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.398468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.398620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.308 [2024-11-19 17:45:00.398641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.308 qpair failed and we were unable to recover it. 00:26:58.308 [2024-11-19 17:45:00.398728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.398748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.398852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.398872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 
00:26:58.309 [2024-11-19 17:45:00.398963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.398984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.399069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.399089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.399182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.399202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.399311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.399332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.399545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.399566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 
00:26:58.309 [2024-11-19 17:45:00.399717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.399737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.399884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.399904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.400076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.400207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.400310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 
00:26:58.309 [2024-11-19 17:45:00.400409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.400527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.400640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.400745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.400858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.400878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 
00:26:58.309 [2024-11-19 17:45:00.401027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.401047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.401210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.401230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.401399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.401420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.401567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.401587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.401695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.401716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 
00:26:58.309 [2024-11-19 17:45:00.401867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.401887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.402035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.402060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.402163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.402183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.402351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.402372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.402450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.402470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 
00:26:58.309 [2024-11-19 17:45:00.402564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.402584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.402758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.402779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.402943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.402992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.403106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.403126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 00:26:58.309 [2024-11-19 17:45:00.403314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.309 [2024-11-19 17:45:00.403334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.309 qpair failed and we were unable to recover it. 
00:26:58.309 [2024-11-19 17:45:00.403443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.403463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.403554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.403574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.403725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.403745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.403824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.403844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.403990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 
00:26:58.310 [2024-11-19 17:45:00.404183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.404301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.404463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.404572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.404733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 
00:26:58.310 [2024-11-19 17:45:00.404838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.404961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.404982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.405062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.405083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.405176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.405195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.405290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.405311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 
00:26:58.310 [2024-11-19 17:45:00.405529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.405549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.405716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.405736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.405823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.405843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.405952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.405980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.406079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.406099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 
00:26:58.310 [2024-11-19 17:45:00.406313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.406333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.406486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.406505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.406720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.406741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.406822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.406842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.407013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.407034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 
00:26:58.310 [2024-11-19 17:45:00.407197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.407217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.407303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.407323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.407486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.407506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.407606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.407626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.407719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.407739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 
00:26:58.310 [2024-11-19 17:45:00.407850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.310 [2024-11-19 17:45:00.407870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.310 qpair failed and we were unable to recover it. 00:26:58.310 [2024-11-19 17:45:00.407961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.311 [2024-11-19 17:45:00.407982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.311 qpair failed and we were unable to recover it. 00:26:58.311 [2024-11-19 17:45:00.408131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.311 [2024-11-19 17:45:00.408152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.311 qpair failed and we were unable to recover it. 00:26:58.311 [2024-11-19 17:45:00.408244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.311 [2024-11-19 17:45:00.408264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.311 qpair failed and we were unable to recover it. 00:26:58.311 [2024-11-19 17:45:00.408354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.311 [2024-11-19 17:45:00.408374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.311 qpair failed and we were unable to recover it. 
00:26:58.312 [2024-11-19 17:45:00.411558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.411578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.411663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.411683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.411783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.411803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.411910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.411929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.412093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.412163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.412302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.412347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.412493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.412527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.412636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.412659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.412755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.412776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.312 [2024-11-19 17:45:00.412923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.312 [2024-11-19 17:45:00.412943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.312 qpair failed and we were unable to recover it.
00:26:58.314 [2024-11-19 17:45:00.424329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.314 [2024-11-19 17:45:00.424348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.314 qpair failed and we were unable to recover it. 00:26:58.314 [2024-11-19 17:45:00.424427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.314 [2024-11-19 17:45:00.424446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.314 qpair failed and we were unable to recover it. 00:26:58.314 [2024-11-19 17:45:00.424541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.314 [2024-11-19 17:45:00.424560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.314 qpair failed and we were unable to recover it. 00:26:58.314 [2024-11-19 17:45:00.424722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.314 [2024-11-19 17:45:00.424742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.314 qpair failed and we were unable to recover it. 00:26:58.314 [2024-11-19 17:45:00.424834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.314 [2024-11-19 17:45:00.424853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.314 qpair failed and we were unable to recover it. 
00:26:58.314 [2024-11-19 17:45:00.424943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.314 [2024-11-19 17:45:00.424970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.314 qpair failed and we were unable to recover it. 00:26:58.314 [2024-11-19 17:45:00.425065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.314 [2024-11-19 17:45:00.425085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.314 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.425252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.425272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.425422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.425442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.425593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.425613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 
00:26:58.315 [2024-11-19 17:45:00.425697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.425716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.425813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.425833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.425910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.425930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.426110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.426131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.426230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.426250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 
00:26:58.315 [2024-11-19 17:45:00.426332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.426352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.426446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.426467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.426560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.426579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.426795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.426815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.426988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 
00:26:58.315 [2024-11-19 17:45:00.427109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.427208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.427309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.427480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.427603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 
00:26:58.315 [2024-11-19 17:45:00.427772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.427909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.427930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.428092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.428113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.428191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.428211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.428440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.428461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 
00:26:58.315 [2024-11-19 17:45:00.428617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.428636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.428728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.428748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.428899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.428919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.429069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.429090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.429171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.429190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 
00:26:58.315 [2024-11-19 17:45:00.429350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.429370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.429537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.315 [2024-11-19 17:45:00.429557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.315 qpair failed and we were unable to recover it. 00:26:58.315 [2024-11-19 17:45:00.429741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.429761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.429857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.429877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.429971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.429992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 
00:26:58.316 [2024-11-19 17:45:00.430098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.430118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.430270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.430289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.430452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.430473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.430630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.430650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.430815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.430835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 
00:26:58.316 [2024-11-19 17:45:00.430928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.430954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.431108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.431128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.431224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.431244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.431486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.431506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.431685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.431705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 
00:26:58.316 [2024-11-19 17:45:00.431869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.431888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.432032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.432053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.432162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.432183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.432400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.432420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.432658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.432678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 
00:26:58.316 [2024-11-19 17:45:00.432755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.432774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.432927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.432952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.433110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.433131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.433230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.433251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.433364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.433384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 
00:26:58.316 [2024-11-19 17:45:00.433531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.433551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.433787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.433808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.433886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.433906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.434027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.316 [2024-11-19 17:45:00.434052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.316 qpair failed and we were unable to recover it. 00:26:58.316 [2024-11-19 17:45:00.434154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.434174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 
00:26:58.317 [2024-11-19 17:45:00.434270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.434290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.434486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.434506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.434588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.434609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.434696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.434716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.434863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.434884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 
00:26:58.317 [2024-11-19 17:45:00.434979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.435000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.435080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.435100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.435182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.435202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.435285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.435305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.435386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.435406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 
00:26:58.317 [2024-11-19 17:45:00.435624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.435645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.435881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.435901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.435998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.436020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.436182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.436203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 00:26:58.317 [2024-11-19 17:45:00.436366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.317 [2024-11-19 17:45:00.436385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.317 qpair failed and we were unable to recover it. 
00:26:58.608 [2024-11-19 17:45:00.453175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.453195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.453343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.453363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.453460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.453480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.453558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.453578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.453739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.453760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 
00:26:58.608 [2024-11-19 17:45:00.453913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.453933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.454100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.454121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.454296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.454316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.454464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.454484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.454650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.454672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 
00:26:58.608 [2024-11-19 17:45:00.454828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.454848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.454929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.454973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.608 qpair failed and we were unable to recover it. 00:26:58.608 [2024-11-19 17:45:00.455188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.608 [2024-11-19 17:45:00.455216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.455298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.455318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.455410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.455429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 
00:26:58.609 [2024-11-19 17:45:00.455617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.455638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.455724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.455744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.455932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.455962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.456071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.456092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.456187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.456207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 
00:26:58.609 [2024-11-19 17:45:00.456314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.456334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.456491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.456512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.456604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.456627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.456704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.456723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.456872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.456890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 
00:26:58.609 [2024-11-19 17:45:00.457041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.457061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.457208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.457227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.457373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.457392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.457550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.457569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.457682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.457701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 
00:26:58.609 [2024-11-19 17:45:00.457815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.457833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.457933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.457958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.458265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.458333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.458509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.458576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.458838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.458904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 
00:26:58.609 [2024-11-19 17:45:00.459095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.459116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.459267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.459286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.459437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.459456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.459605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.459624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.459773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.459791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 
00:26:58.609 [2024-11-19 17:45:00.459892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.459911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.460155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.460175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.460284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.460303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.460385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.460403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 00:26:58.609 [2024-11-19 17:45:00.460551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.609 [2024-11-19 17:45:00.460570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.609 qpair failed and we were unable to recover it. 
00:26:58.610 [2024-11-19 17:45:00.460671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.460689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.460808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.460826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.461067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.461088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.461298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.461317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.461484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.461503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 
00:26:58.610 [2024-11-19 17:45:00.461656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.461676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.461785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.461805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.461890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.461910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.462090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.462111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.462259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.462280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 
00:26:58.610 [2024-11-19 17:45:00.462427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.462447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.462559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.462579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.462743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.462764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.462858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.462878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.462987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.463008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 
00:26:58.610 [2024-11-19 17:45:00.463167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.463187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.463331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.463352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.463534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.463554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.463673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.463714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.463852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.463886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 
00:26:58.610 [2024-11-19 17:45:00.463990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.464024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.464212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.464235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.464334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.464355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.464467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.464488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.464648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.464668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 
00:26:58.610 [2024-11-19 17:45:00.464763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.464783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.464931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.464969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.465047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.465066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.465322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.465342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.465534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.465554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 
00:26:58.610 [2024-11-19 17:45:00.465706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.465726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.465890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.465910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.466031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.610 [2024-11-19 17:45:00.466052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.610 qpair failed and we were unable to recover it. 00:26:58.610 [2024-11-19 17:45:00.466266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.466286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.466393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.466414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 
00:26:58.611 [2024-11-19 17:45:00.466563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.466583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.466772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.466792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.466965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.466987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.467149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.467169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.467262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.467283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 
00:26:58.611 [2024-11-19 17:45:00.467528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.467549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.467716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.467736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.467954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.467975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.468121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.468142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.468373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.468393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 
00:26:58.611 [2024-11-19 17:45:00.468573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.468608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.468733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.468764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.468959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.468992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.469109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.469140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.469426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.469458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 
00:26:58.611 [2024-11-19 17:45:00.469628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.469660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.469960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.469982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.470158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.470178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.470281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.470301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.470541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.470560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 
00:26:58.611 [2024-11-19 17:45:00.470638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.470658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.470820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.470840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.470934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.470960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.611 [2024-11-19 17:45:00.471123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.611 [2024-11-19 17:45:00.471143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.611 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.471242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.471262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 
00:26:58.612 [2024-11-19 17:45:00.471424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.471444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.471543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.471563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.471736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.471755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.471917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.471937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.472103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.472124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 
00:26:58.612 [2024-11-19 17:45:00.472284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.472304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.472471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.472490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.472654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.472673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.472835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.472855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.472974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.472996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 
00:26:58.612 [2024-11-19 17:45:00.473081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.473100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.473339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.473360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.473532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.473555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.473636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.473655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.473734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.473754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 
00:26:58.612 [2024-11-19 17:45:00.473909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.473930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.474094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.474115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.474208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.474227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.474404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.474424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.474573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.474593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 
00:26:58.612 [2024-11-19 17:45:00.474685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.474705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.474867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.474887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.475035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.475056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.475219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.475239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.475338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.475358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 
00:26:58.612 [2024-11-19 17:45:00.475526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.475547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.475644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.475664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.475781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.475801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.612 [2024-11-19 17:45:00.475951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.612 [2024-11-19 17:45:00.475972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.612 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.476081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.476101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 
00:26:58.613 [2024-11-19 17:45:00.476313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.476333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.476495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.476515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.476670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.476691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.476776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.476795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.477006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.477027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 
00:26:58.613 [2024-11-19 17:45:00.477184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.477204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.477389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.477419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.477638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.477669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.477915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.477946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.478142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.478177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 
00:26:58.613 [2024-11-19 17:45:00.478349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.478379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.478546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.478566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.478757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.478788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.478920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.478961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.479199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.479230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 
00:26:58.613 [2024-11-19 17:45:00.479497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.479517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.479699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.479719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.479903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.479923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.480023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.480043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.480210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.480230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 
00:26:58.613 [2024-11-19 17:45:00.480408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.480440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.480568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.480600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.480784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.480816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.480925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.480965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.481095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.481126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 
00:26:58.613 [2024-11-19 17:45:00.481298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.481329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.481496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.481515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.481710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.481742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.481930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.481980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.613 [2024-11-19 17:45:00.482172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.482204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 
00:26:58.613 [2024-11-19 17:45:00.482387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.613 [2024-11-19 17:45:00.482407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.613 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.482619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.482639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.482799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.482819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.483062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.483094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.483269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.483300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 
00:26:58.614 [2024-11-19 17:45:00.483438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.483468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.483650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.483673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.483825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.483845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.484056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.484089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.484256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.484287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 
00:26:58.614 [2024-11-19 17:45:00.484462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.484481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.484686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.484705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.484786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.484805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.484981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.485002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 00:26:58.614 [2024-11-19 17:45:00.485080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.614 [2024-11-19 17:45:00.485100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.614 qpair failed and we were unable to recover it. 
00:26:58.614 [2024-11-19 17:45:00.485184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.485203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.485309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.485329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.485474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.485494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.485573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.485592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.485759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.485779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.485927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.485952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.486118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.486139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.486295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.486314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.486389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.486408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.486554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.486623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.486782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.486851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.487083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.487105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.487259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.487279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.487449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.487470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.487720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.487751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.487942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.487982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.488157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.488189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.488422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.488464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.488776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.488810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.489062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.614 [2024-11-19 17:45:00.489097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.614 qpair failed and we were unable to recover it.
00:26:58.614 [2024-11-19 17:45:00.489284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.489307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.489496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.489516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.489615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.489635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.489778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.489797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.489898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.489918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.490098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.490132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.490237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.490268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.490443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.490473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.490645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.490677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.490854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.490873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.491098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.491119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.491320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.491340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.491570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.491606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.491820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.491852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.491981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.492016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.492238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.492270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.492397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.492428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.492551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.492582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.492773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.492796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.493022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.493042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.493151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.493171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.493386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.493407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.493513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.493533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.493625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.493643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.493787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.493807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.493996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.494016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.494207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.494228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.494420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.494440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.494549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.494570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.494755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.494774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.495006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.615 [2024-11-19 17:45:00.495028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.615 qpair failed and we were unable to recover it.
00:26:58.615 [2024-11-19 17:45:00.495190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.495210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.495396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.495416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.495531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.495551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.495659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.495679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.495841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.495861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.495936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.495960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.496174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.496194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.496293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.496312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.496434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.496471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.496587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.496618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.496740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.496770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.496945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.496981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.497067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.497086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.497190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.497210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.497359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.497377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.497536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.497556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.497643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.497662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.497815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.497835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.497916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.497934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.498099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.498120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.498209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.498228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.498462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.498481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.498577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.498597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.498699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.498719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.498885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.498904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.499157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.499191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.499378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.499408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.499597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.499629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.499802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.499822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.499933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.499958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.500116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.500137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.500285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.500305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.500398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.500420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.616 qpair failed and we were unable to recover it.
00:26:58.616 [2024-11-19 17:45:00.500569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.616 [2024-11-19 17:45:00.500589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.500800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.500819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.500991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.501012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.501178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.501219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.501393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.501425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.501546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.501576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.501743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.501776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.501900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.501932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.502066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.502098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.502267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.502309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.502403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.617 [2024-11-19 17:45:00.502423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.617 qpair failed and we were unable to recover it.
00:26:58.617 [2024-11-19 17:45:00.502579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.502599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.502711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.502732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.502894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.502913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.503146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.503167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.503282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.503303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 
00:26:58.617 [2024-11-19 17:45:00.503460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.503480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.503568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.503587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.503801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.503821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.504071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.504093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.504260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.504280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 
00:26:58.617 [2024-11-19 17:45:00.504382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.504402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.504483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.504502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.504645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.504665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.504775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.504794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 00:26:58.617 [2024-11-19 17:45:00.504900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.617 [2024-11-19 17:45:00.504921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.617 qpair failed and we were unable to recover it. 
00:26:58.617 [2024-11-19 17:45:00.505021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.505045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.505199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.505220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.505314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.505335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.505495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.505518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.505599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.505621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 
00:26:58.618 [2024-11-19 17:45:00.505781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.505801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.506040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.506061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.506155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.506175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.506274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.506294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.506385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.506404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 
00:26:58.618 [2024-11-19 17:45:00.506555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.506575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.506721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.506741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.506835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.506854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.507020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.507041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.507135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.507155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 
00:26:58.618 [2024-11-19 17:45:00.507307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.507327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.507406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.507424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.507592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.507612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.507791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.507811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.507966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.507987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 
00:26:58.618 [2024-11-19 17:45:00.508091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.508111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.508219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.508238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.508382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.508402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.508553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.508573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.508687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.508706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 
00:26:58.618 [2024-11-19 17:45:00.508889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.508908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.509073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.509094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.509185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.509205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.509379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.509399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 00:26:58.618 [2024-11-19 17:45:00.509505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.509525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.618 qpair failed and we were unable to recover it. 
00:26:58.618 [2024-11-19 17:45:00.509679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.618 [2024-11-19 17:45:00.509703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.509937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.509962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.510131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.510150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.510317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.510348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.510484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.510516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 
00:26:58.619 [2024-11-19 17:45:00.510696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.510726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.510846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.510877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.510970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.510991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.511221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.511292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.511436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.511472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 
00:26:58.619 [2024-11-19 17:45:00.511640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.511684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.511869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.511901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.512110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.512143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.512265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.512296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.512545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.512577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 
00:26:58.619 [2024-11-19 17:45:00.512762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.512794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.513030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.513064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.513325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.513356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.513468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.513488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.513596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.513615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 
00:26:58.619 [2024-11-19 17:45:00.513763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.513782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.513874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.513893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.514084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.514104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.514200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.514220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.514367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.514386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 
00:26:58.619 [2024-11-19 17:45:00.514470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.514490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.514587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.514607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.514709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.514733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.514889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.514909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.515010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.515030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 
00:26:58.619 [2024-11-19 17:45:00.515123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.515142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.619 [2024-11-19 17:45:00.515291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.619 [2024-11-19 17:45:00.515310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.619 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.515398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.515419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.515511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.515531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.515693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.515713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 
00:26:58.620 [2024-11-19 17:45:00.515934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.515959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.516057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.516077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.516246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.516266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.516368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.516388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.516540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.516561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 
00:26:58.620 [2024-11-19 17:45:00.516705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.516725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.516807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.516827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.516925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.516945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.517141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.517162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 00:26:58.620 [2024-11-19 17:45:00.517315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.620 [2024-11-19 17:45:00.517335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.620 qpair failed and we were unable to recover it. 
00:26:58.624 [2024-11-19 17:45:00.534803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.534847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.534966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.534998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.535122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.535153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.535329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.535361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.535472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.535491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 
00:26:58.624 [2024-11-19 17:45:00.535706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.535727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.535817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.535840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.536006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.536028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.536183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.536252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.536512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.536548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 
00:26:58.624 [2024-11-19 17:45:00.536739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.536774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.537021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.537056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.537247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.537279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.537543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.537576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.537877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.537909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 
00:26:58.624 [2024-11-19 17:45:00.538057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.538091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.538231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.538264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.538376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.538399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.538554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.538575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.538744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.538763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 
00:26:58.624 [2024-11-19 17:45:00.538930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.538955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.539174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.539194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.539344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.539365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.624 qpair failed and we were unable to recover it. 00:26:58.624 [2024-11-19 17:45:00.539531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.624 [2024-11-19 17:45:00.539551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.539744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.539777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 
00:26:58.625 [2024-11-19 17:45:00.539995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.540028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.540209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.540242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.540418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.540451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.540584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.540615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.540855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.540876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 
00:26:58.625 [2024-11-19 17:45:00.540985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.541006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.541222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.541243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.541403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.541423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.541526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.541547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.541648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.541668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 
00:26:58.625 [2024-11-19 17:45:00.541849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.541884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.542084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.542118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.542363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.542396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.542579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.542610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.542858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.542890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 
00:26:58.625 [2024-11-19 17:45:00.543093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.543126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.543244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.543268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.543369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.543390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.543529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.543549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.543762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.543783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 
00:26:58.625 [2024-11-19 17:45:00.543889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.543909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.544063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.544084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.544231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.544251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.544479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.544500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.544718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.544738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 
00:26:58.625 [2024-11-19 17:45:00.544848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.544869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.545053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.545075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.545260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.545280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.625 qpair failed and we were unable to recover it. 00:26:58.625 [2024-11-19 17:45:00.545382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.625 [2024-11-19 17:45:00.545402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.545510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.545531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 
00:26:58.626 [2024-11-19 17:45:00.545622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.545641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.545740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.545760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.545908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.545928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.546194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.546237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.546342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.546373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 
00:26:58.626 [2024-11-19 17:45:00.546506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.546538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.546659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.546690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.546804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.546844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.547052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.547085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.547224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.547256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 
00:26:58.626 [2024-11-19 17:45:00.547440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.547474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.547591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.547623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.547727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.547758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.547996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.548030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.548149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.548180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 
00:26:58.626 [2024-11-19 17:45:00.548433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.548466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.548674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.548704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.548834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.548866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.548994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.549015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 00:26:58.626 [2024-11-19 17:45:00.549159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.626 [2024-11-19 17:45:00.549180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.626 qpair failed and we were unable to recover it. 
00:26:58.626 [2024-11-19 17:45:00.549326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.626 [2024-11-19 17:45:00.549346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.626 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 17:45:00.549501 through 17:45:00.570242 — almost always for tqpair=0x18a9ba0, and three times around 17:45:00.555 for tqpair=0x7f410c000b90 ...]
00:26:58.630 [2024-11-19 17:45:00.570420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.570441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.570520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.570539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.570619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.570637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.570748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.570768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.570923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.570942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 
00:26:58.630 [2024-11-19 17:45:00.571048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.571068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.571236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.571256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.571415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.571435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.571662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.571683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.571840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.571859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 
00:26:58.630 [2024-11-19 17:45:00.572062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.572094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.572212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.572244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.572425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.572456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.572719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.572740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.572963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.572985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 
00:26:58.630 [2024-11-19 17:45:00.573142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.573162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.573273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.630 [2024-11-19 17:45:00.573292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-11-19 17:45:00.573383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.573402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.573557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.573576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.573723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.573743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.631 [2024-11-19 17:45:00.573891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.573934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.574053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.574090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.574327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.574358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.574532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.574563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.574802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.574833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.631 [2024-11-19 17:45:00.575065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.575086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.575323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.575343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.575517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.575537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.575778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.575809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.575936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.575979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.631 [2024-11-19 17:45:00.576107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.576137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.576254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.576286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.576408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.576438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.576562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.576594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.576765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.576795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.631 [2024-11-19 17:45:00.576982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.577016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.577149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.577179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.577305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.577337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.577457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.577489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.577682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.577714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.631 [2024-11-19 17:45:00.577921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.577941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.578115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.578135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.578285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.578305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.578466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.578486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.578581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.578601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.631 [2024-11-19 17:45:00.578707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.578727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.578874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.578895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.579006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.631 [2024-11-19 17:45:00.579030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-11-19 17:45:00.579190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.579218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.579467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.579487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 
00:26:58.632 [2024-11-19 17:45:00.579574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.579593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.579738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.579758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.579841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.579860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.579957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.579977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.580147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.580167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 
00:26:58.632 [2024-11-19 17:45:00.580317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.580337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.580435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.580456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.580693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.580713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.580927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.580956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.581119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.581138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 
00:26:58.632 [2024-11-19 17:45:00.581282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.581303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.581420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.581439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.581540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.581561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.581663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.581683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.581774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.581795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 
00:26:58.632 [2024-11-19 17:45:00.581901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.581921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.582097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.582118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.582230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.582249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.582350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.582370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.582462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.582481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 
00:26:58.632 [2024-11-19 17:45:00.582645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.582665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.582772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.582792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.582939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.582974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.583166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.583186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 00:26:58.632 [2024-11-19 17:45:00.583349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.583369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it. 
00:26:58.632 [2024-11-19 17:45:00.583456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.632 [2024-11-19 17:45:00.583479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.632 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats through 2024-11-19 17:45:00.601404 (log timestamps 00:26:58.632-00:26:58.636) ...]
00:26:58.636 [2024-11-19 17:45:00.601505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.601526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.601696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.601716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.601802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.601822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.601924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.601944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.602062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.602083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 
00:26:58.636 [2024-11-19 17:45:00.602167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.602188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.602338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.602374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.602497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.602530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.602797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.602827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.603013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 
00:26:58.636 [2024-11-19 17:45:00.603118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.603289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.603397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.603518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.603687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 
00:26:58.636 [2024-11-19 17:45:00.603852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.603959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.603980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.604083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.604102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.604384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.604404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 00:26:58.636 [2024-11-19 17:45:00.604550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.636 [2024-11-19 17:45:00.604570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.636 qpair failed and we were unable to recover it. 
00:26:58.637 [2024-11-19 17:45:00.604647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.604667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.604933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.604968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.605073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.605093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.605245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.605266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.605378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.605398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 
00:26:58.637 [2024-11-19 17:45:00.605494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.605514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.605610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.605630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.605725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.605745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.605855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.605893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.606073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.606107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 
00:26:58.637 [2024-11-19 17:45:00.606281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.606312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.606427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.606459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.606583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.606615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.606735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.606766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.606977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.606998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 
00:26:58.637 [2024-11-19 17:45:00.607161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.607182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.607277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.607297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.607378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.607397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.607548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.607619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.607878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.607970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 
00:26:58.637 [2024-11-19 17:45:00.608166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.608188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.608283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.608304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.608528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.608548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.608698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.608719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.608876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.608897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 
00:26:58.637 [2024-11-19 17:45:00.608998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.609020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.609103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.609123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.609214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.609234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.609313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.609337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.609432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.609452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 
00:26:58.637 [2024-11-19 17:45:00.609691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.609711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.637 [2024-11-19 17:45:00.609807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.637 [2024-11-19 17:45:00.609828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.637 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.609978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.609998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.610166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.610186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.610266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.610285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 
00:26:58.638 [2024-11-19 17:45:00.610393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.610413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.610649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.610669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.610820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.610856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.610992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.611024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.611221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.611253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 
00:26:58.638 [2024-11-19 17:45:00.611460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.611492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.611633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.611653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.611809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.611830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.611986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.612093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 
00:26:58.638 [2024-11-19 17:45:00.612275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.612398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.612508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.612686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.612855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 
00:26:58.638 [2024-11-19 17:45:00.612960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.612981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.613213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.613234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.613334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.613354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.613513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.613534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 00:26:58.638 [2024-11-19 17:45:00.613628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.613648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it. 
00:26:58.638 [2024-11-19 17:45:00.613723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.638 [2024-11-19 17:45:00.613745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.638 qpair failed and we were unable to recover it.
[the three-line error above repeats ~115 times between [2024-11-19 17:45:00.613] and [2024-11-19 17:45:00.633] (console timestamps 00:26:58.638–00:26:58.642); every repeat is identical except for the microsecond timestamp, with tqpair=0x18a9ba0 in all but three occurrences, which report tqpair=0x7f4104000b90; all fail with errno = 111 (ECONNREFUSED) connecting to addr=10.0.0.2, port=4420, and each ends with "qpair failed and we were unable to recover it."]
00:26:58.642 [2024-11-19 17:45:00.633517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.633538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.633629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.633648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.633807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.633827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.633921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.633940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.634037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.634057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 
00:26:58.642 [2024-11-19 17:45:00.634145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.634164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.634313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.634333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.634575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.634598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.634814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.634834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.635000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.635021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 
00:26:58.642 [2024-11-19 17:45:00.635176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.635196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.635413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.635433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.635514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.635534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.635697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.642 [2024-11-19 17:45:00.635716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.642 qpair failed and we were unable to recover it. 00:26:58.642 [2024-11-19 17:45:00.635804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.635825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 
00:26:58.643 [2024-11-19 17:45:00.635906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.635926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.636169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.636190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.636413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.636433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.636667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.636687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.636787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.636807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 
00:26:58.643 [2024-11-19 17:45:00.637033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.637054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.637146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.637166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.637313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.637334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.637499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.637520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.637742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.637762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 
00:26:58.643 [2024-11-19 17:45:00.637937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.637961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.638043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.638064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.638158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.638178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.638258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.638278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.638441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.638461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 
00:26:58.643 [2024-11-19 17:45:00.638558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.638578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.638819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.638839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.639071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.639093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.639183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.639202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.639280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.639304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 
00:26:58.643 [2024-11-19 17:45:00.639519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.639538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.639713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.639733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.639841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.639861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.639941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.639965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.640051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.640071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 
00:26:58.643 [2024-11-19 17:45:00.640247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.640268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.640345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.640363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.640518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.640538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.640631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.640650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 00:26:58.643 [2024-11-19 17:45:00.640886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.643 [2024-11-19 17:45:00.640906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.643 qpair failed and we were unable to recover it. 
00:26:58.643 [2024-11-19 17:45:00.641057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.641078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.641318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.641339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.641406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.641425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.641505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.641524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.641674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.641694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 
00:26:58.644 [2024-11-19 17:45:00.641854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.641875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.642028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.642048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.642140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.642161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.642397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.642417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.642517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.642537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 
00:26:58.644 [2024-11-19 17:45:00.642689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.642709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.642870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.642891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.643052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.643073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.643256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.643276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.643536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.643556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 
00:26:58.644 [2024-11-19 17:45:00.643662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.643682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.643780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.643799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.644038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.644060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.644202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.644222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.644373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.644393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 
00:26:58.644 [2024-11-19 17:45:00.644484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.644504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.644715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.644736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.644827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.644846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.645015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.645036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.645144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.645164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 
00:26:58.644 [2024-11-19 17:45:00.645267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.645287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.645450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.645471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.645619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.645639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.645747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.645767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 00:26:58.644 [2024-11-19 17:45:00.645918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.644 [2024-11-19 17:45:00.645937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.644 qpair failed and we were unable to recover it. 
00:26:58.644 [2024-11-19 17:45:00.646087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.646157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.646341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.646411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.646552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.646587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.646682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.646704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.646875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.646895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.647053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.647074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.647285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.647304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.647389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.647408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.647563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.647583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.647688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.647707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.647860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.647880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.647990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.648010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.648109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.648129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.648219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.648239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.648331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.648350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.648438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.648457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.648721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.648742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.648815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.648834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.649013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.649034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.649187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.649206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.649283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.649302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.649403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.649423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.649564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.649584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.649751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.649771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.645 [2024-11-19 17:45:00.649868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.645 [2024-11-19 17:45:00.649888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.645 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.649966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.649986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.650216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.650236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.650394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.650417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.650602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.650622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.650776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.650795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.650962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.650983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.651069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.651089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.651252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.651273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.651381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.651400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.651621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.651641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.651847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.651867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.652050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.652071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.652219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.652239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.652338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.652357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.652447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.652467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.652681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.652701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.652870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.652889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.652970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.652989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.653139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.653158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.653237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.653257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.653347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.653367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.653450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.653471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.653629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.653648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.653734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.653754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.653908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.653928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.654110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.654131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.654228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.654248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.654350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.654369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.654542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.654562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.654657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.654681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.654896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.654915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.655012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.655033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.646 [2024-11-19 17:45:00.655129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.646 [2024-11-19 17:45:00.655149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.646 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.655361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.655382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.655584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.655604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.655714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.655734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.655972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.655993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.656209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.656230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.656387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.656407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.656494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.656515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.656684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.656705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.656813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.656834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.657000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.657022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.657112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.657132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.657296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.657316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.657502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.657522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.657615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.657635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.657721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.657742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.657935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.657962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.658209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.658229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.658337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.658357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.658460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.658481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.658581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.658600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.658709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.658730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.658817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.658836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.658960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.658981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.659062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.659085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.659332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.659352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.659463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.659483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.659572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.659592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.659669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.659689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.647 [2024-11-19 17:45:00.659781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.647 [2024-11-19 17:45:00.659801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.647 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.659889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.659908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.660030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.660157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.660263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.660426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.660619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.660725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.660841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.660994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.661065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.661197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.661234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.661365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.661397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.661578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.661610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.661876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.661907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.662049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.662083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.662182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.662205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.662377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.662397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.662611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.662630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.662708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.662728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.662875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.662894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.663000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.663023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.663168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.663190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.663272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.663292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.663470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.663491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.663639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.663658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.663743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.663763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.663868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.663888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.664071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.664093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.664243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.648 [2024-11-19 17:45:00.664264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.648 qpair failed and we were unable to recover it.
00:26:58.648 [2024-11-19 17:45:00.664351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.648 [2024-11-19 17:45:00.664371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.648 qpair failed and we were unable to recover it. 00:26:58.648 [2024-11-19 17:45:00.664457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.648 [2024-11-19 17:45:00.664478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.648 qpair failed and we were unable to recover it. 00:26:58.648 [2024-11-19 17:45:00.664635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.648 [2024-11-19 17:45:00.664656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.648 qpair failed and we were unable to recover it. 00:26:58.648 [2024-11-19 17:45:00.664735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.648 [2024-11-19 17:45:00.664755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.648 qpair failed and we were unable to recover it. 00:26:58.648 [2024-11-19 17:45:00.664923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.648 [2024-11-19 17:45:00.664943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.648 qpair failed and we were unable to recover it. 
00:26:58.649 [2024-11-19 17:45:00.665043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.665063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.665227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.665247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.665430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.665465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.665674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.665715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.665918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.665960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 
00:26:58.649 [2024-11-19 17:45:00.666147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.666169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.666250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.666270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.666436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.666456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.666558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.666579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.666664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.666683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 
00:26:58.649 [2024-11-19 17:45:00.666843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.666865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.666954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.666975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.667072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.667093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.667194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.667213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.667292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.667312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 
00:26:58.649 [2024-11-19 17:45:00.667474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.667494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.667741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.667762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.667840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.667860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.667951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.667972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.668119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.668140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 
00:26:58.649 [2024-11-19 17:45:00.668227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.668248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.668430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.668450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.668606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.668627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.668714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.668734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.668898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.668919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 
00:26:58.649 [2024-11-19 17:45:00.669013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.669036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.669271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.669293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.669389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.669410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.669605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.669624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.669787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.669807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 
00:26:58.649 [2024-11-19 17:45:00.669898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.669919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.670023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.670055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.649 [2024-11-19 17:45:00.670134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.649 [2024-11-19 17:45:00.670155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.649 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.670233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.670252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.670405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.670425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 
00:26:58.650 [2024-11-19 17:45:00.670514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.670534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.670633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.670653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.670818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.670837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.670937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.670983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.671081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.671101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 
00:26:58.650 [2024-11-19 17:45:00.671202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.671224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.671317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.671337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.671421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.671440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.671663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.671683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.671767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.671786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 
00:26:58.650 [2024-11-19 17:45:00.671881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.671901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 
00:26:58.650 [2024-11-19 17:45:00.672446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.672829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.672998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.673019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 
00:26:58.650 [2024-11-19 17:45:00.673116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.673136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.673217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.673240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.673405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.673425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.673588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.673608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.673700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.673719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 
00:26:58.650 [2024-11-19 17:45:00.673803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.673824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.674063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.650 [2024-11-19 17:45:00.674084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.650 qpair failed and we were unable to recover it. 00:26:58.650 [2024-11-19 17:45:00.674164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.674183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 00:26:58.651 [2024-11-19 17:45:00.674356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.674376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 00:26:58.651 [2024-11-19 17:45:00.674455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.674475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 
00:26:58.651 [2024-11-19 17:45:00.674586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.674606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 00:26:58.651 [2024-11-19 17:45:00.674760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.674779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 00:26:58.651 [2024-11-19 17:45:00.675017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.675038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 00:26:58.651 [2024-11-19 17:45:00.675131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.675151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 00:26:58.651 [2024-11-19 17:45:00.675300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.651 [2024-11-19 17:45:00.675320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.651 qpair failed and we were unable to recover it. 
00:26:58.651 [2024-11-19 17:45:00.675482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.651 [2024-11-19 17:45:00.675501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.651 qpair failed and we were unable to recover it.
[... the preceding three-line sequence (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 17:45:00.675650 through 17:45:00.694268 ...]
00:26:58.654 [2024-11-19 17:45:00.694331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.654 [2024-11-19 17:45:00.694351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.654 qpair failed and we were unable to recover it.
00:26:58.654 [2024-11-19 17:45:00.694497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.694517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.694677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.694698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.694793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.694813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.694926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.694952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.695062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.695083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 
00:26:58.654 [2024-11-19 17:45:00.695161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.695182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.695280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.695301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.695541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.695563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.695658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.695677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.695753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.695773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 
00:26:58.654 [2024-11-19 17:45:00.695941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.695968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.696158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.696178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.696324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.696343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.696519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.696539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.696621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.696641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 
00:26:58.654 [2024-11-19 17:45:00.696814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.696834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.696982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.697004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.697118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.697138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.697355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.697375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.697530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.697559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 
00:26:58.654 [2024-11-19 17:45:00.697634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.697657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.697752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.654 [2024-11-19 17:45:00.697773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.654 qpair failed and we were unable to recover it. 00:26:58.654 [2024-11-19 17:45:00.697917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.697937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.698126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.698146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.698306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.698327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 
00:26:58.655 [2024-11-19 17:45:00.698490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.698510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.698603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.698623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.698712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.698732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.698811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.698843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.699003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.699024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 
00:26:58.655 [2024-11-19 17:45:00.699113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.699134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.699295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.699315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.699490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.699510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.699657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.699678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.699872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.699892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 
00:26:58.655 [2024-11-19 17:45:00.700054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.700075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.700220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.700241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.700391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.700411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.700559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.700579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.700744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.700763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 
00:26:58.655 [2024-11-19 17:45:00.700953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.700974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.701070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.701090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.701259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.701279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.701500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.701520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.701676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.701696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 
00:26:58.655 [2024-11-19 17:45:00.701846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.701866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.702100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.702122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.702287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.702307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.702401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.702421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.702611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.702631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 
00:26:58.655 [2024-11-19 17:45:00.702848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.702868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.702965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.702986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.703173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.703192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.703361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.703381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.703474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.703494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 
00:26:58.655 [2024-11-19 17:45:00.703671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.703691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.703785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.655 [2024-11-19 17:45:00.703805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.655 qpair failed and we were unable to recover it. 00:26:58.655 [2024-11-19 17:45:00.703986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.704007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.704096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.704116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.704295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.704315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.656 [2024-11-19 17:45:00.704501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.704522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.704672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.704692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.704847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.704868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.705036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.705057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.705209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.705229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.656 [2024-11-19 17:45:00.705387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.705407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.705568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.705588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.705757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.705777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.705854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.705874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.706037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.706059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.656 [2024-11-19 17:45:00.706295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.706315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.706401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.706420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.706528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.706548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.706714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.706734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.706848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.706868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.656 [2024-11-19 17:45:00.706971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.706992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.707139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.707159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.707372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.707391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.707514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.707533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.707689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.707709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.656 [2024-11-19 17:45:00.707933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.707966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.708060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.708079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.708173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.708193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.708302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.708322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.708535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.708554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.656 [2024-11-19 17:45:00.708644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.708663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.708904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.708924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.709017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.709037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.709119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.709143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.709354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.709374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.656 [2024-11-19 17:45:00.709478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.709498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.709583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.709602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.709831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.709851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.709997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.710018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 00:26:58.656 [2024-11-19 17:45:00.710121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.656 [2024-11-19 17:45:00.710142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.656 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.710220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.710240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.710392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.710412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.710513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.710533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.710636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.710656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.710732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.710754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.710911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.710931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.711027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.711168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.711351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.711444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.711560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.711653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.711844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.711960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.711981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.712072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.712092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.712330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.712362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.712545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.712576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.712702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.712734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.712915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.712936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.713034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.713055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.713236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.713271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.713461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.713493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.713779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.713809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.713980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.714013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.714275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.714306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.714500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.714531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.714654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.714685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.714959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.714992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.715195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.715215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.715368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.715389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.715537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.715568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.715677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.715708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.715945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.715986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.716105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.716136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.716257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.716288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.657 [2024-11-19 17:45:00.716455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.716475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.716570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.716590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.716781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.716801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.716895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.716915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 00:26:58.657 [2024-11-19 17:45:00.717082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.657 [2024-11-19 17:45:00.717103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.657 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.717259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.717279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.717377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.717397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.717488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.717508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.717674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.717714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.717894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.717925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.718141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.718174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.718345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.718365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.718542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.718562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.718742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.718773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.718913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.718958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.719053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.719073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.719170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.719190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.719431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.719451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.719545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.719565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.719657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.719677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.719777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.719797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.720047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.720069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.720220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.720240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.720477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.720508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.720682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.720712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.720890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.720910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.721073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.721093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.721275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.721307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.721431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.721461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.721573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.721605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.721714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.721744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.721864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.721896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.722112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.722145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.722385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.722416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.722585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.722616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.722832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.722864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.722990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.723010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.723185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.723206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.723394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.723424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 00:26:58.658 [2024-11-19 17:45:00.723618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.658 [2024-11-19 17:45:00.723650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.658 qpair failed and we were unable to recover it. 
00:26:58.658 [2024-11-19 17:45:00.723821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.658 [2024-11-19 17:45:00.723852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.658 qpair failed and we were unable to recover it.
00:26:58.662 [2024-11-19 17:45:00.745389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.745422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.745624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.745657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.745896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.745929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.746133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.746166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.746427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.746460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 
00:26:58.662 [2024-11-19 17:45:00.746742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.746773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.746976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.747009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.747256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.747277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.747469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.747489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.747681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.747701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 
00:26:58.662 [2024-11-19 17:45:00.747992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.748026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.748266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.748298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.748410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.748430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.748689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.748721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.748975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.749008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 
00:26:58.662 [2024-11-19 17:45:00.749273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.749305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.749532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.749562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.749799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.749831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.750068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.750102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.750339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.750360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 
00:26:58.662 [2024-11-19 17:45:00.750532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.662 [2024-11-19 17:45:00.750559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.662 qpair failed and we were unable to recover it. 00:26:58.662 [2024-11-19 17:45:00.750816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.750848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.751061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.751082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.751294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.751314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.751527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.751547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 
00:26:58.663 [2024-11-19 17:45:00.751708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.751739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.751923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.751961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.752087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.752118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.752311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.752341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.752543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.752574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 
00:26:58.663 [2024-11-19 17:45:00.752833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.752865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.753158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.753192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.753405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.753437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.753695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.753727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.753914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.753945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 
00:26:58.663 [2024-11-19 17:45:00.754129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.754161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.754370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.754391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.754554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.754574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.754818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.754848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.755034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.755067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 
00:26:58.663 [2024-11-19 17:45:00.755308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.755340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.755538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.755558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.755723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.755743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.755989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.756022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.756278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.756298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 
00:26:58.663 [2024-11-19 17:45:00.756474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.756505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.756689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.756720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.756840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.756877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.757081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.757103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.757207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.757227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 
00:26:58.663 [2024-11-19 17:45:00.757417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.757449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.757688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.757720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.757927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.757967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.758157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.758178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.758292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.758312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 
00:26:58.663 [2024-11-19 17:45:00.758413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.663 [2024-11-19 17:45:00.758433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.663 qpair failed and we were unable to recover it. 00:26:58.663 [2024-11-19 17:45:00.758608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.758628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.758761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.758781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.758960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.758981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.759077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.759097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 
00:26:58.664 [2024-11-19 17:45:00.759252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.759271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.759430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.759451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.759539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.759578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.759749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.759780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.760023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.760056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 
00:26:58.664 [2024-11-19 17:45:00.760188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.760209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.760307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.760327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.760476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.760496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.760665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.760686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.760826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.760846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 
00:26:58.664 [2024-11-19 17:45:00.760993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.761027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.761243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.761276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.761409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.761440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.761621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.761653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.761843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.761876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 
00:26:58.664 [2024-11-19 17:45:00.762051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.762072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.762168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.762188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.762284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.762305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.762520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.762553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.762794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.762826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 
00:26:58.664 [2024-11-19 17:45:00.763004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.763025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.763186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.763218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.763346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.763378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.763565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.763596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.763775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.763807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 
00:26:58.664 [2024-11-19 17:45:00.763990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.764023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.764196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.764228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.764353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.764374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.764593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.764624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.764819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.764851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 
00:26:58.664 [2024-11-19 17:45:00.765037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.664 [2024-11-19 17:45:00.765070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.664 qpair failed and we were unable to recover it. 00:26:58.664 [2024-11-19 17:45:00.765199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.765219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.765396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.765427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.765620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.765652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.765772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.765803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 
00:26:58.665 [2024-11-19 17:45:00.765921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.765965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.766127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.766148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.766312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.766332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.766447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.766478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.766678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.766710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 
00:26:58.665 [2024-11-19 17:45:00.766879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.766911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.767161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.767182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.767277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.767299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.767444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.767464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.767644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.767664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 
00:26:58.665 [2024-11-19 17:45:00.767916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.767936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.768160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.768181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.768294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.768314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.768553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.768573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.768662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.768683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 
00:26:58.665 [2024-11-19 17:45:00.768908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.768939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.769089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.769121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.769239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.769270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.769508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.769548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 00:26:58.665 [2024-11-19 17:45:00.769732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.665 [2024-11-19 17:45:00.769752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.665 qpair failed and we were unable to recover it. 
00:26:58.665 [2024-11-19 17:45:00.769851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.769875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.770023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.770045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.770126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.770146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.770288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.770308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.770479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.770500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 
00:26:58.666 [2024-11-19 17:45:00.770657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.770677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.770821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.770841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.770921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.770941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.771124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.771144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.771294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.771315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 
00:26:58.666 [2024-11-19 17:45:00.771528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.771548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.771714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.771734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.771974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.772008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.772199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.772230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.772416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.772456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 
00:26:58.666 [2024-11-19 17:45:00.772557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.772577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.772794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.772826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.772935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.772975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.773112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.773145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.773381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.773413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 
00:26:58.666 [2024-11-19 17:45:00.773604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.773636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.773772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.773804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.773920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.773964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.774138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.774170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.774269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.774301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 
00:26:58.666 [2024-11-19 17:45:00.774572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.774604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.774776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.774808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.774923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.774973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.775192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.775212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.775379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.775400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 
00:26:58.666 [2024-11-19 17:45:00.775650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.775670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.775774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.775795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.776026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.776048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.776289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.666 [2024-11-19 17:45:00.776309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.666 qpair failed and we were unable to recover it. 00:26:58.666 [2024-11-19 17:45:00.776467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.776488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 
00:26:58.667 [2024-11-19 17:45:00.776729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.776750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.777001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.777023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.777304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.777336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.777512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.777544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.777810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.777842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 
00:26:58.667 [2024-11-19 17:45:00.778084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.778117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.778360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.778393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.778643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.778663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.778896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.778917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.779136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.779157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 
00:26:58.667 [2024-11-19 17:45:00.779304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.779325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.779575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.779607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.779840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.779873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.780077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.780110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.780348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.780380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 
00:26:58.667 [2024-11-19 17:45:00.780548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.780580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.780841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.780873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.781081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.781102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.781313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.781333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 00:26:58.667 [2024-11-19 17:45:00.781598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.667 [2024-11-19 17:45:00.781621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.667 qpair failed and we were unable to recover it. 
00:26:58.667 [2024-11-19 17:45:00.781822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.781852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.782048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.782081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.782339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.782370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.782614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.782645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.782909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.782942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.783155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.783186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.783442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.783462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.783620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.783651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.783910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.667 [2024-11-19 17:45:00.783942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.667 qpair failed and we were unable to recover it.
00:26:58.667 [2024-11-19 17:45:00.784057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.784089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.784347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.784368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.784582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.784602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.784765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.784784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.785008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.785042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.785221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.785253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.785491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.785521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.785757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.785788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.786071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.786112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.786329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.786349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.786514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.786534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.786713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.786733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.786978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.786999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.787118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.787149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.787275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.787307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.787415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.787447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.787694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.787725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.787902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.787934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.788192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.788213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.788394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.788426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.788664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.788695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.788864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.788895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.789167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.789201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.789442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.789473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.789651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.789671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.789911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.789943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.790137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.790169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.790356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.790396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.790578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.790598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.790773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.790792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.790986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.791020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.791263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.791295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.791477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.668 [2024-11-19 17:45:00.791497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.668 qpair failed and we were unable to recover it.
00:26:58.668 [2024-11-19 17:45:00.791738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.791770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.792008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.792041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.792281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.792321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.792540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.792560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.792731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.792751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.792932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.792958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.793157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.793188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.793385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.793417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.793541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.793572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.793747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.793778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.793957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.793990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.794293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.794314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.794553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.794574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.794816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.794848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.795088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.795122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.795262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.795294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.795488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.795509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.795728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.795748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.795961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.795982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.796167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.796188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.796432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.796463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.796683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.796715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.796998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.797041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.797256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.797276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.797424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.669 [2024-11-19 17:45:00.797444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.669 qpair failed and we were unable to recover it.
00:26:58.669 [2024-11-19 17:45:00.797655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.951 [2024-11-19 17:45:00.797692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.951 qpair failed and we were unable to recover it.
00:26:58.951 [2024-11-19 17:45:00.797914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.951 [2024-11-19 17:45:00.797957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.951 qpair failed and we were unable to recover it.
00:26:58.951 [2024-11-19 17:45:00.798190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.951 [2024-11-19 17:45:00.798224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.951 qpair failed and we were unable to recover it.
00:26:58.951 [2024-11-19 17:45:00.798497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.951 [2024-11-19 17:45:00.798528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.951 qpair failed and we were unable to recover it.
00:26:58.951 [2024-11-19 17:45:00.798809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.951 [2024-11-19 17:45:00.798841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.951 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.799130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.799173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.799284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.799304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.799460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.799480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.799637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.799658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.799765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.799785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.799964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.799997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.800261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.800294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.800553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.800586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.800773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.800805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.801105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.801139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.801275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.801312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.801468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.801488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.801595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.801615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.801828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.801849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.802003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.802025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.802311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.802332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.802443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.802464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.802702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.802722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.802884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.802905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.803143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.803177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.803441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.803474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.803712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.803745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.803964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.804003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.804149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.804182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.804391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.804424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.804693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.804726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.804980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.805002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.805217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.805238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.805509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.805542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.805731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.805762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.806024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.806065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.806343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.806365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.806575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.806595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.806806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.806827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.807014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.807038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.807225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.807246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.807373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.952 [2024-11-19 17:45:00.807395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.952 qpair failed and we were unable to recover it.
00:26:58.952 [2024-11-19 17:45:00.807645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.953 [2024-11-19 17:45:00.807678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.953 qpair failed and we were unable to recover it.
00:26:58.953 [2024-11-19 17:45:00.807967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.953 [2024-11-19 17:45:00.808000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.953 qpair failed and we were unable to recover it.
00:26:58.953 [2024-11-19 17:45:00.808219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.953 [2024-11-19 17:45:00.808252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.953 qpair failed and we were unable to recover it.
00:26:58.953 [2024-11-19 17:45:00.808447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.953 [2024-11-19 17:45:00.808479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.953 qpair failed and we were unable to recover it.
00:26:58.953 [2024-11-19 17:45:00.808744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.953 [2024-11-19 17:45:00.808766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.953 qpair failed and we were unable to recover it.
00:26:58.953 [2024-11-19 17:45:00.808927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.953 [2024-11-19 17:45:00.808999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.953 qpair failed and we were unable to recover it.
00:26:58.953 [2024-11-19 17:45:00.809192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.809225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.809362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.809394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.809517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.809538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.809700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.809744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.810005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.810040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 
00:26:58.953 [2024-11-19 17:45:00.810282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.810321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.810537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.810559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.810754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.810775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.811011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.811034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.811151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.811183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 
00:26:58.953 [2024-11-19 17:45:00.811315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.811348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.811472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.811504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.811769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.811801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.811995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.812029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.812286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.812318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 
00:26:58.953 [2024-11-19 17:45:00.812491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.812523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.812722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.812755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.813040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.813074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.813344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.813377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.813666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.813687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 
00:26:58.953 [2024-11-19 17:45:00.813916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.813937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.814108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.814129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.814323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.814344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.814549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.814581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.814753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.814784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 
00:26:58.953 [2024-11-19 17:45:00.815035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.815069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.815203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.815223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.815402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.815437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.815655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.815686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.815955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.815989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 
00:26:58.953 [2024-11-19 17:45:00.816206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.816239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.816479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.953 [2024-11-19 17:45:00.816510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.953 qpair failed and we were unable to recover it. 00:26:58.953 [2024-11-19 17:45:00.816747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.816768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.816932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.816966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.817121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.817142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.817351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.817373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.817600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.817621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.817771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.817792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.817984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.818006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.818172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.818205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.818415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.818447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.818656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.818689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.818964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.818998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.819281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.819302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.819467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.819488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.819728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.819760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.819882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.819915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.820058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.820096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.820375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.820408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.820652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.820684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.820807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.820838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.821128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.821163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.821362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.821405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.821637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.821657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.821918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.821965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.822254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.822286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.822474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.822507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.822688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.822709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.822875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.822908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.823142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.823175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.823450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.823483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.823669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.823690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.823909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.823942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.824151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.824184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.824447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.824480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.824744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.824766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.824866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.824887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.825153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.825175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.825424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.825457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.954 [2024-11-19 17:45:00.825701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.825734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 
00:26:58.954 [2024-11-19 17:45:00.825977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.954 [2024-11-19 17:45:00.826011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.954 qpair failed and we were unable to recover it. 00:26:58.955 [2024-11-19 17:45:00.826152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.955 [2024-11-19 17:45:00.826185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.955 qpair failed and we were unable to recover it. 00:26:58.955 [2024-11-19 17:45:00.826324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.955 [2024-11-19 17:45:00.826356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.955 qpair failed and we were unable to recover it. 00:26:58.955 [2024-11-19 17:45:00.826617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.955 [2024-11-19 17:45:00.826638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.955 qpair failed and we were unable to recover it. 00:26:58.955 [2024-11-19 17:45:00.826854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.955 [2024-11-19 17:45:00.826879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.955 qpair failed and we were unable to recover it. 
00:26:58.955 [2024-11-19 17:45:00.826980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.955 [2024-11-19 17:45:00.827002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.955 qpair failed and we were unable to recover it.
00:26:58.958 [the preceding three-line record repeats without variation from 17:45:00.827112 through 17:45:00.849029 — roughly 110 further connect() failures, all errno = 111, all on tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it."]
00:26:58.958 [2024-11-19 17:45:00.849188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.849219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.849345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.849376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.849563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.849594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.849702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.849733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.849903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.849934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 
00:26:58.958 [2024-11-19 17:45:00.850064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.850096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.850202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.850238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.850483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.850502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.850667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.850687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.850881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.850912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 
00:26:58.958 [2024-11-19 17:45:00.851044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.851077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.851317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.851349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.851447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.851477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.851662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.851694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.851879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.851910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 
00:26:58.958 [2024-11-19 17:45:00.852099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.852132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.852256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.852287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.852490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.852521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.852730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.852762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.852956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.852990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 
00:26:58.958 [2024-11-19 17:45:00.853120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.853152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.853417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.853449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.853722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.853742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.853850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.853870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.854017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.854039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 
00:26:58.958 [2024-11-19 17:45:00.854237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.854269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.854459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.854490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.854742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.854773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.958 [2024-11-19 17:45:00.854980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.958 [2024-11-19 17:45:00.855012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.958 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.855134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.855165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.855406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.855438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.855649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.855669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.855865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.855896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.856094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.856128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.856380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.856411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.856645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.856665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.856763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.856783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.856957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.856978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.857061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.857081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.857250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.857281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.857465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.857496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.857763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.857795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.858030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.858063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.858247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.858278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.858456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.858475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.858711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.858731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.858828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.858848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.859004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.859043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.859154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.859185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.859416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.859447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.859657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.859677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.859900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.859920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.860096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.860117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.860356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.860388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.860640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.860672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.860931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.860970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.861193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.861225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.861488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.861509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.861681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.861701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.861916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.861958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.862154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.862186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.862322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.862353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.862544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.862576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.862757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.862789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.862976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.863009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 
00:26:58.959 [2024-11-19 17:45:00.863196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.863228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.959 qpair failed and we were unable to recover it. 00:26:58.959 [2024-11-19 17:45:00.863431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.959 [2024-11-19 17:45:00.863463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 00:26:58.960 [2024-11-19 17:45:00.863727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.863747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 00:26:58.960 [2024-11-19 17:45:00.864010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.864031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 00:26:58.960 [2024-11-19 17:45:00.864194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.864214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 
00:26:58.960 [2024-11-19 17:45:00.864318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.864338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 00:26:58.960 [2024-11-19 17:45:00.864576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.864596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 00:26:58.960 [2024-11-19 17:45:00.864786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.864806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 00:26:58.960 [2024-11-19 17:45:00.865088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.865109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 00:26:58.960 [2024-11-19 17:45:00.865380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.960 [2024-11-19 17:45:00.865403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.960 qpair failed and we were unable to recover it. 
00:26:58.960 [2024-11-19 17:45:00.865634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.960 [2024-11-19 17:45:00.865654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.960 qpair failed and we were unable to recover it.
00:26:58.962 (last message repeated 114 times, [2024-11-19 17:45:00.865896] through [2024-11-19 17:45:00.894416]; only timestamps differ)
00:26:58.963 [2024-11-19 17:45:00.894565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.894596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.894836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.894867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.895044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.895079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.895328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.895348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.895660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.895692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 
00:26:58.963 [2024-11-19 17:45:00.895890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.895921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.896182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.896223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.896482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.896503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.896794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.896815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.896981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.897002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 
00:26:58.963 [2024-11-19 17:45:00.897173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.897193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.897408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.897428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.897667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.897688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.897965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.897998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.898136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.898168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 
00:26:58.963 [2024-11-19 17:45:00.898412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.898444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.898690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.898723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.898988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.899023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.899221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.899253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 00:26:58.963 [2024-11-19 17:45:00.899441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.963 [2024-11-19 17:45:00.899472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.963 qpair failed and we were unable to recover it. 
00:26:58.963 [2024-11-19 17:45:00.899732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.899752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.899902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.899922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.900082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.900116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.900319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.900352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.900618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.900658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 
00:26:58.964 [2024-11-19 17:45:00.900908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.900930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.901047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.901068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.901181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.901202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.901396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.901417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.901632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.901652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 
00:26:58.964 [2024-11-19 17:45:00.901872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.901905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.902033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.902066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.902285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.902317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.902495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.902533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.902801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.902833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 
00:26:58.964 [2024-11-19 17:45:00.903085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.903120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.903298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.903331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.903613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.903645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.903778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.903799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.903994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.904016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 
00:26:58.964 [2024-11-19 17:45:00.904255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.904276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.904394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.904415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.904657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.904690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.904987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.905020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.905304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.905336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 
00:26:58.964 [2024-11-19 17:45:00.905656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.905688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.905961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.905996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.906187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.906219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.906339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.906371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.906613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.906634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 
00:26:58.964 [2024-11-19 17:45:00.906801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.906822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.907077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.907110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.907306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.907339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.907611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.907643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.907840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.907861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 
00:26:58.964 [2024-11-19 17:45:00.908049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.908070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.908240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.908285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.908552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.908584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.964 qpair failed and we were unable to recover it. 00:26:58.964 [2024-11-19 17:45:00.908841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.964 [2024-11-19 17:45:00.908883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.909045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.909067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 
00:26:58.965 [2024-11-19 17:45:00.909290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.909321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.909525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.909557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.909757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.909800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.909961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.909982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.910115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.910136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 
00:26:58.965 [2024-11-19 17:45:00.910259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.910280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.910525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.910557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.910852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.910884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.911161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.911196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.911398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.911419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 
00:26:58.965 [2024-11-19 17:45:00.911583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.911614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.911903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.911934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.912194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.912226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.912480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.912512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 00:26:58.965 [2024-11-19 17:45:00.912650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.912683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it. 
00:26:58.965 [2024-11-19 17:45:00.912862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.965 [2024-11-19 17:45:00.912894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.965 qpair failed and we were unable to recover it.
00:26:58.968 [2024-11-19 17:45:00.937886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.937918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.938115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.938148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.938323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.938355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.938531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.938563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.938737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.938767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 
00:26:58.968 [2024-11-19 17:45:00.938985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.939018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.939168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.939206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.939449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.939481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.939618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.939649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.939899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.939930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 
00:26:58.968 [2024-11-19 17:45:00.940065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.940097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.940351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.940383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.940633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.940665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.940771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.940803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.941053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.941087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 
00:26:58.968 [2024-11-19 17:45:00.941280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.941312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.941490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.941522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.968 [2024-11-19 17:45:00.941760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.968 [2024-11-19 17:45:00.941781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.968 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.941975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.941996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.942239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.942259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 
00:26:58.969 [2024-11-19 17:45:00.942480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.942501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.942726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.942747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.943014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.943057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.943298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.943331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.943534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.943565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 
00:26:58.969 [2024-11-19 17:45:00.943700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.943720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.943977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.944011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.944305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.944336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.944520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.944552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.944841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.944862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 
00:26:58.969 [2024-11-19 17:45:00.945101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.945123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.945385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.945406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.945649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.945669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.945872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.945895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.946159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.946182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 
00:26:58.969 [2024-11-19 17:45:00.946441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.946462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.946711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.946732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.946897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.946918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.947100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.947122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.947286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.947318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 
00:26:58.969 [2024-11-19 17:45:00.947507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.947538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.947804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.947836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.948077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.948110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.948307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.948338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.948602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.948622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 
00:26:58.969 [2024-11-19 17:45:00.948900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.948921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.949152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.949192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.949444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.949476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.949771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.949802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.950088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.950123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 
00:26:58.969 [2024-11-19 17:45:00.950264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.950297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.969 [2024-11-19 17:45:00.950565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.969 [2024-11-19 17:45:00.950597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.969 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.950718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.950751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.951004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.951039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.951311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.951343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 
00:26:58.970 [2024-11-19 17:45:00.951587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.951619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.951742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.951762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.952002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.952024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.952266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.952287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.952518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.952541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 
00:26:58.970 [2024-11-19 17:45:00.952785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.952809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.953068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.953089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.953196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.953216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.953386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.953406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.953673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.953694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 
00:26:58.970 [2024-11-19 17:45:00.953962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.953984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.954083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.954102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.954267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.954287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.954531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.954578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.954758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.954789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 
00:26:58.970 [2024-11-19 17:45:00.955074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.955107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.955249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.955280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.955498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.955530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.955810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.955831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 00:26:58.970 [2024-11-19 17:45:00.956048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.956070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it. 
00:26:58.970 [2024-11-19 17:45:00.956221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.970 [2024-11-19 17:45:00.956242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.970 qpair failed and we were unable to recover it.
[... the three-message sequence above (posix_sock_create connect() failure, nvme_tcp_qpair_connect_sock error, "qpair failed and we were unable to recover it") repeats 114 more times between 17:45:00.956 and 17:45:00.984, identical except for timestamps; every attempt targets tqpair=0x18a9ba0, addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:26:58.973 [2024-11-19 17:45:00.984978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.985012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.985287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.985319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.985599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.985631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.985896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.985935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.986166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.986187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 
00:26:58.973 [2024-11-19 17:45:00.986439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.986471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.986661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.986692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.986882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.986914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.987133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.987167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 00:26:58.973 [2024-11-19 17:45:00.987443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.973 [2024-11-19 17:45:00.987475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.973 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.987739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.987771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.988077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.988112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.988395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.988426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.988731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.988763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.989029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.989064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.989279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.989311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.989528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.989560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.989769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.989803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.990068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.990089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.990334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.990355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.990507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.990529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.990695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.990716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.990989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.991024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.991252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.991286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.991488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.991521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.991706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.991727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.991835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.991856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.991962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.991984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.992103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.992124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.992278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.992300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.992398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.992419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.992538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.992559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.992720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.992742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.992995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.993036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.993285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.993318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.993498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.993531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.993644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.993665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.993916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.993959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.994144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.994177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.994464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.994498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.994776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.994808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.995067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.995090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.995264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.995287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.995488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.995510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.995689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.995722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 
00:26:58.974 [2024-11-19 17:45:00.995926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.995970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.996174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.996207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.974 qpair failed and we were unable to recover it. 00:26:58.974 [2024-11-19 17:45:00.996427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.974 [2024-11-19 17:45:00.996460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.996568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.996607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.996873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.996913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 
00:26:58.975 [2024-11-19 17:45:00.997049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.997083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.997384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.997417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.997684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.997716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.998004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.998038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.998256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.998288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 
00:26:58.975 [2024-11-19 17:45:00.998507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.998539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.998747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.998769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.999021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.999055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.999317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.999348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:00.999628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.999661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 
00:26:58.975 [2024-11-19 17:45:00.999944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:00.999978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.000175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.000197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.000376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.000397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.000679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.000700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.000963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.000985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 
00:26:58.975 [2024-11-19 17:45:01.001186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.001208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.001463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.001496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.001708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.001729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.001974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.001997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.002236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.002257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 
00:26:58.975 [2024-11-19 17:45:01.002501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.002523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.002776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.002808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.002990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.003023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.003207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.003241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 00:26:58.975 [2024-11-19 17:45:01.003381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.003414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it. 
00:26:58.975 [2024-11-19 17:45:01.003689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.975 [2024-11-19 17:45:01.003722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.975 qpair failed and we were unable to recover it.
[... same triplet — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeated ~114 more times, timestamps 2024-11-19 17:45:01.003872 through 17:45:01.031928 ...]
00:26:58.978 [2024-11-19 17:45:01.032179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.978 [2024-11-19 17:45:01.032200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.978 qpair failed and we were unable to recover it. 00:26:58.978 [2024-11-19 17:45:01.032401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.978 [2024-11-19 17:45:01.032421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.978 qpair failed and we were unable to recover it. 00:26:58.978 [2024-11-19 17:45:01.032599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.978 [2024-11-19 17:45:01.032620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.978 qpair failed and we were unable to recover it. 00:26:58.978 [2024-11-19 17:45:01.032872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.978 [2024-11-19 17:45:01.032904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.978 qpair failed and we were unable to recover it. 00:26:58.978 [2024-11-19 17:45:01.033169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.978 [2024-11-19 17:45:01.033202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.978 qpair failed and we were unable to recover it. 
00:26:58.978 [2024-11-19 17:45:01.033449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.978 [2024-11-19 17:45:01.033470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.978 qpair failed and we were unable to recover it. 00:26:58.978 [2024-11-19 17:45:01.033708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.978 [2024-11-19 17:45:01.033729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.978 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.033932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.033972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.034144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.034164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.034318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.034340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 
00:26:58.979 [2024-11-19 17:45:01.034576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.034610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.034791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.034822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.035088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.035123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.035400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.035421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.035652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.035674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 
00:26:58.979 [2024-11-19 17:45:01.035919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.035940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.036215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.036235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.036461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.036496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.036774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.036808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.036943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.036989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 
00:26:58.979 [2024-11-19 17:45:01.037197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.037220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.037404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.037427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.037555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.037577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.037834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.037868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.038055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.038090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 
00:26:58.979 [2024-11-19 17:45:01.038305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.038339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.038480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.038514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.038734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.038768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.039047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.039071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.039182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.039204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 
00:26:58.979 [2024-11-19 17:45:01.039502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.039536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.039659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.039691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.039919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.039961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.040221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.040244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.040374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.040407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 
00:26:58.979 [2024-11-19 17:45:01.040696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.040732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.041012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.041035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.041207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.041230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.041406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.041440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.041648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.041681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 
00:26:58.979 [2024-11-19 17:45:01.041875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.979 [2024-11-19 17:45:01.041898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.979 qpair failed and we were unable to recover it. 00:26:58.979 [2024-11-19 17:45:01.042140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.042164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.042352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.042374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.042648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.042681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.042939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.043002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 
00:26:58.980 [2024-11-19 17:45:01.043166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.043201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.043484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.043518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.043681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.043724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.043963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.043988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.044117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.044140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 
00:26:58.980 [2024-11-19 17:45:01.044333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.044367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.044594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.044630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.044909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.044944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.045167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.045202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.045391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.045425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 
00:26:58.980 [2024-11-19 17:45:01.045701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.045736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.045938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.045970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.046149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.046172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.046352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.046375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.046614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.046655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 
00:26:58.980 [2024-11-19 17:45:01.046911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.046964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.047129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.047163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.047370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.047393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.047580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.047616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.047876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.047911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 
00:26:58.980 [2024-11-19 17:45:01.048161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.048197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.048365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.048399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.048673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.048707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.048990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.049026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.049176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.049210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 
00:26:58.980 [2024-11-19 17:45:01.049447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.049480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.049771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.049806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.050102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.050126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.050259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.050282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 00:26:58.980 [2024-11-19 17:45:01.050408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.980 [2024-11-19 17:45:01.050430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.980 qpair failed and we were unable to recover it. 
00:26:58.980 [2024-11-19 17:45:01.050722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.980 [2024-11-19 17:45:01.050747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:58.980 qpair failed and we were unable to recover it.
00:26:58.983 [... the same record repeats continuously from 17:45:01.050 through 17:45:01.077 with only timestamps varying: connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x18a9ba0 (10.0.0.2:4420) and "qpair failed and we were unable to recover it." ...]
00:26:58.983 [2024-11-19 17:45:01.078065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.983 [2024-11-19 17:45:01.078102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.983 qpair failed and we were unable to recover it. 00:26:58.983 [2024-11-19 17:45:01.078243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.983 [2024-11-19 17:45:01.078276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.983 qpair failed and we were unable to recover it. 00:26:58.983 [2024-11-19 17:45:01.078478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.983 [2024-11-19 17:45:01.078512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.983 qpair failed and we were unable to recover it. 00:26:58.983 [2024-11-19 17:45:01.078705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.983 [2024-11-19 17:45:01.078739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.983 qpair failed and we were unable to recover it. 00:26:58.983 [2024-11-19 17:45:01.078863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.078886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 
00:26:58.984 [2024-11-19 17:45:01.079150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.079175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.079413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.079436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.079542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.079565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.079752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.079786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.079989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.080026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 
00:26:58.984 [2024-11-19 17:45:01.080157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.080192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.080312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.080346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.080543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.080577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.080791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.080826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.081010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.081052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 
00:26:58.984 [2024-11-19 17:45:01.081142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.081165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.081325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.081348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.081529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.081553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.081665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.081688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.081868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.081899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 
00:26:58.984 [2024-11-19 17:45:01.082014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.082039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.082246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.082270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.082381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.082404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.082508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.082532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.082760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.082783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 
00:26:58.984 [2024-11-19 17:45:01.082967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.082991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.083151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.083175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.083293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.083327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.083622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.083657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.083796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.083830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 
00:26:58.984 [2024-11-19 17:45:01.083974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.084009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.084306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.084340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.084468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.084502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.084645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.084678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.084887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.084920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 
00:26:58.984 [2024-11-19 17:45:01.085136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.085159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.085393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.085426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.085571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.984 [2024-11-19 17:45:01.085605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.984 qpair failed and we were unable to recover it. 00:26:58.984 [2024-11-19 17:45:01.085867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.085901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.086028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.086062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.086185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.086217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.086416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.086450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.086648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.086681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.086960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.086984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.087158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.087181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.087356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.087389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.087598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.087631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.087889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.087912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.088158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.088182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.088289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.088312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.088543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.088565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.088685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.088708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.088901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.088924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.089125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.089161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.089294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.089328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.089618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.089652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.089780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.089813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.090005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.090041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.090239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.090262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.090510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.090532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.090717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.090751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.091030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.091066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.091255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.091294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.091407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.091430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.091676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.091699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.091821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.091843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.092049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.092072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.092178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.092201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.092300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.092323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.092474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.092497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.092605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.092628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.092802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.092824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.092918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.092941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.093128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.093152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.093251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.093274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 
00:26:58.985 [2024-11-19 17:45:01.093445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.093468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.985 [2024-11-19 17:45:01.093577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.985 [2024-11-19 17:45:01.093612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.985 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.093727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.093762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.093889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.093923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.094209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.094232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.094488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.094512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.094693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.094716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.094877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.094900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.095150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.095174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.095293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.095316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.095407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.095430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.095533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.095556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.095875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.095902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.096140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.096163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.096336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.096358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.096633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.096668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.096849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.096881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.097081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.097116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.097312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.097335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.097514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.097547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.097736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.097769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.097883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.097917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.098063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.098097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.098276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.098298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.098429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.098462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.098726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.098759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.098902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.098935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.099182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.099206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.099378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.099400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.099572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.099605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.099752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.099786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.099970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.100005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.100221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.100244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.100448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.100471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.100568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.100590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.100764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.100796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.101009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.101044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.101325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.101358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.101496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.101530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 00:26:58.986 [2024-11-19 17:45:01.101746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.986 [2024-11-19 17:45:01.101785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.986 qpair failed and we were unable to recover it. 
00:26:58.986 [2024-11-19 17:45:01.101972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.102007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.102209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.102242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.102460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.102494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.102715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.102748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.102986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.103030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 
00:26:58.987 [2024-11-19 17:45:01.103205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.103228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.103420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.103453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.103746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.103780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.104067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.104111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.104288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.104311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 
00:26:58.987 [2024-11-19 17:45:01.104478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.104501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.104668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.104707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.104982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.105019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.105214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.105246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.105499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.105521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 
00:26:58.987 [2024-11-19 17:45:01.105769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.105791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.106030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.106052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.106289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.106312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.106482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.106505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.106698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.106721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 
00:26:58.987 [2024-11-19 17:45:01.106908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.106931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.107058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.107080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.107266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.107288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.107591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.107625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.107838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.107871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 
00:26:58.987 [2024-11-19 17:45:01.108084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.108119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.108260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.108301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.108593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.108626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.108769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.108804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.109069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.109105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 
00:26:58.987 [2024-11-19 17:45:01.109333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.109357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.109559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.109581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.109838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.109861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.110063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.110086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.110265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.110287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 
00:26:58.987 [2024-11-19 17:45:01.110410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.110432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.110627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.110660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.110885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.110918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.987 [2024-11-19 17:45:01.111261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.987 [2024-11-19 17:45:01.111297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.987 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.111550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.111585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 
00:26:58.988 [2024-11-19 17:45:01.111806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.111841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.112123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.112148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.112257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.112280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.112612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.112647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.112849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.112884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 
00:26:58.988 [2024-11-19 17:45:01.113139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.113175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.113438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.113461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.113629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.113652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.113849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.113872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.114000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.114024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 
00:26:58.988 [2024-11-19 17:45:01.114148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.114171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.114262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.114284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.114526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.114550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.114783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.114805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 00:26:58.988 [2024-11-19 17:45:01.115074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.115099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it. 
00:26:58.988 [2024-11-19 17:45:01.115259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.988 [2024-11-19 17:45:01.115282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.988 qpair failed and we were unable to recover it.
[... the same connect()/qpair error triple repeats from 17:45:01.115 through 17:45:01.142 and is elided here: every reconnect attempt to 10.0.0.2:4420 on tqpair=0x18a9ba0 failed with errno = 111 (ECONNREFUSED), and each qpair failed without recovery ...]
00:26:58.991 [2024-11-19 17:45:01.142563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.142598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.142884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.142918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.143141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.143164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.143354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.143388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.143588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.143621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 
00:26:58.991 [2024-11-19 17:45:01.143923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.143968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.144183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.144218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.144423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.144457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.144743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.144778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.145039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.145074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 
00:26:58.991 [2024-11-19 17:45:01.145257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.145281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.145457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.145492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.145648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.145682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.145877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.145924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.146111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.146136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 
00:26:58.991 [2024-11-19 17:45:01.146229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.146274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.146410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.146445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.991 qpair failed and we were unable to recover it. 00:26:58.991 [2024-11-19 17:45:01.146650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.991 [2024-11-19 17:45:01.146684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:58.992 [2024-11-19 17:45:01.146821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.146855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:58.992 [2024-11-19 17:45:01.146980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.147023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 
00:26:58.992 [2024-11-19 17:45:01.147228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.147262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:58.992 [2024-11-19 17:45:01.147450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.147485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:58.992 [2024-11-19 17:45:01.147627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.147661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:58.992 [2024-11-19 17:45:01.147922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.147969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:58.992 [2024-11-19 17:45:01.148170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.148194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 
00:26:58.992 [2024-11-19 17:45:01.148433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.148456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:58.992 [2024-11-19 17:45:01.148570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.992 [2024-11-19 17:45:01.148595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:58.992 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.148775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.148801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.149040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.149066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.149162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.149187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 
00:26:59.271 [2024-11-19 17:45:01.149413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.149449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.149728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.149762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.149970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.150006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.150148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.150182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.150317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.150363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 
00:26:59.271 [2024-11-19 17:45:01.150471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.150494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.150665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.150699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.150965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.151001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.151197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.151232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.151346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.151370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 
00:26:59.271 [2024-11-19 17:45:01.151563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.151597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.151880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.151914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.152131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.152174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.152360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.152384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.152505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.152529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 
00:26:59.271 [2024-11-19 17:45:01.152715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.152739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.152845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.152872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.153061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.153086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.153281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.153315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.153442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.153477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 
00:26:59.271 [2024-11-19 17:45:01.153591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.153625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.153781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.153816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.154024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.271 [2024-11-19 17:45:01.154062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.271 qpair failed and we were unable to recover it. 00:26:59.271 [2024-11-19 17:45:01.154252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.154276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.154456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.154491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 
00:26:59.272 [2024-11-19 17:45:01.154620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.154655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.154847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.154882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.155082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.155119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.155306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.155329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.155517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.155551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 
00:26:59.272 [2024-11-19 17:45:01.155757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.155791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.156067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.156105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.156305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.156339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.156566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.156601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.156881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.156923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 
00:26:59.272 [2024-11-19 17:45:01.157117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.157141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.157240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.157263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.157492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.157515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.157693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.157717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.157964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.157989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 
00:26:59.272 [2024-11-19 17:45:01.158164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.158187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.158368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.158401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.158534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.158568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.158681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.158719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 00:26:59.272 [2024-11-19 17:45:01.158921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.272 [2024-11-19 17:45:01.158965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.272 qpair failed and we were unable to recover it. 
00:26:59.272 [2024-11-19 17:45:01.159165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.272 [2024-11-19 17:45:01.159198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.272 qpair failed and we were unable to recover it.
00:26:59.275 [... the same three-line error record repeated ~110 more times for tqpair=0x18a9ba0, addr=10.0.0.2, port=4420, timestamps advancing from 17:45:01.159 through 17:45:01.187 ...]
00:26:59.275 [2024-11-19 17:45:01.187263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.187297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.187575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.187598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.187765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.187787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.187977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.188000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.188254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.188277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 
00:26:59.275 [2024-11-19 17:45:01.188458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.188480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.188656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.188678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.188911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.188945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.189209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.189234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.189419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.189442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 
00:26:59.275 [2024-11-19 17:45:01.189670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.189692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.189917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.189939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.275 [2024-11-19 17:45:01.190177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.275 [2024-11-19 17:45:01.190199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.275 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.190459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.190498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.190809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.190843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.191052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.191099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.191356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.191379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.191568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.191590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.191753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.191776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.191973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.192010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.192313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.192346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.192534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.192568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.192715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.192749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.193029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.193064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.193350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.193385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.193526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.193560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.193698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.193732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.194007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.194051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.194232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.194255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.194426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.194448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.194673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.194706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.194981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.195015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.195222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.195245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.195423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.195446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.195657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.195691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.195905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.195941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.196079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.196101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.196297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.196330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.196472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.196507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.196723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.196758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.196894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.196927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.197195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.197229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.197443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.197476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.197710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.197744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.197966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.198001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.198160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.198183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.198379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.198417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.198558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.198591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.198859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.198892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.199133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.199168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 
00:26:59.276 [2024-11-19 17:45:01.199304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.276 [2024-11-19 17:45:01.199337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.276 qpair failed and we were unable to recover it. 00:26:59.276 [2024-11-19 17:45:01.199611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.199646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.199918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.199978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.200189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.200223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.200382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.200417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.277 [2024-11-19 17:45:01.200574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.200607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.200858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.200893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.201123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.201157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.201369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.201404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.201565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.201588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.277 [2024-11-19 17:45:01.201762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.201785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.201972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.202009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.202224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.202258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.202466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.202499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.202763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.202785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.277 [2024-11-19 17:45:01.202909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.202931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.203126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.203149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.203273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.203295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.203477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.203500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.203755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.203779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.277 [2024-11-19 17:45:01.203935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.203968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.204130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.204153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.204390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.204424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.204650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.204690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.204978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.205019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.277 [2024-11-19 17:45:01.205201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.205223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.205311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.205335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.205519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.205541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.205660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.205684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.205856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.205898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.277 [2024-11-19 17:45:01.206197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.206233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.206450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.206484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.206672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.206705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.206970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.207006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.207225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.207250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.277 [2024-11-19 17:45:01.207413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.207437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.207628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.207663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.207969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.208006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.208201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.208224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 00:26:59.277 [2024-11-19 17:45:01.208411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.277 [2024-11-19 17:45:01.208434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.277 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.208615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.208639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.208772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.208807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.209055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.209091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.209379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.209404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.209543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.209568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.209687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.209711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.209874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.209896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.210055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.210091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.210373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.210407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.210706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.210740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.210924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.210988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.211199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.211233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.211436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.211472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.211706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.211742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.212011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.212048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.212309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.212343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.212522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.212546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.212663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.212687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.212883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.212906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.213163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.213187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.213357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.213380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.213561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.213583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.213814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.213838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.214008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.214045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.214325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.214404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.214586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.214625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.214860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.214896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.215075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.215111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.215369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.215404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.215607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.215641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.215915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.215960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.216172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.216208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.216401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.216435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.216672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.216711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.216930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.216978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 
00:26:59.278 [2024-11-19 17:45:01.217190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.217225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.217412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.217448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.217654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.278 [2024-11-19 17:45:01.217694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.278 qpair failed and we were unable to recover it. 00:26:59.278 [2024-11-19 17:45:01.217963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.218000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.218269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.218305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.218588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.218622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.218842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.218878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.219156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.219192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.219337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.219362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.219551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.219585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.219723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.219757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.220044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.220080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.220361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.220385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.220590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.220612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.220774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.220798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.220925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.220972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.221193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.221228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.221486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.221521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.221726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.221760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.221960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.222004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.222283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.222308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.222514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.222549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.222748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.222783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.223037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.223074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.223224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.223248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.223363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.223386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.223648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.223690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.223830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.223864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.224072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.224108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.224396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.224432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.224673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.224696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.224926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.224989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.225223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.225259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.225470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.225505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.225639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.225663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.225911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.225945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.226241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.226277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.226466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.226501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.226691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.226725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.226914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.226962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 
00:26:59.279 [2024-11-19 17:45:01.227170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.227193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.279 qpair failed and we were unable to recover it. 00:26:59.279 [2024-11-19 17:45:01.227302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.279 [2024-11-19 17:45:01.227325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 00:26:59.280 [2024-11-19 17:45:01.227455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.227479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 00:26:59.280 [2024-11-19 17:45:01.227725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.227806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 00:26:59.280 [2024-11-19 17:45:01.228026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.228067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 
00:26:59.280 [2024-11-19 17:45:01.228262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.228298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 00:26:59.280 [2024-11-19 17:45:01.228502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.228540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 00:26:59.280 [2024-11-19 17:45:01.228730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.228765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 00:26:59.280 [2024-11-19 17:45:01.228902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.228937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 00:26:59.280 [2024-11-19 17:45:01.229078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.280 [2024-11-19 17:45:01.229114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.280 qpair failed and we were unable to recover it. 
00:26:59.283 [2024-11-19 17:45:01.252993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.253028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.253287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.253321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.253603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.253626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.253746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.253767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.254030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.254065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 
00:26:59.283 [2024-11-19 17:45:01.254352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.254375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.254570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.254593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.254760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.254782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.255020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.255057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.255245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.255268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 
00:26:59.283 [2024-11-19 17:45:01.255445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.255479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.255667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.255702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.256009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.256044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.256306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.256340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.256586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.256620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 
00:26:59.283 [2024-11-19 17:45:01.256891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.256926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.257065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.257099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.257325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.257359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.257566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.257589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.257698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.257720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 
00:26:59.283 [2024-11-19 17:45:01.257825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.257848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.258075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.258098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.258285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.258319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.258521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.258556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.258755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.258789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 
00:26:59.283 [2024-11-19 17:45:01.259043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.259078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.259312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.259335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.259515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.259538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.259697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.259720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.283 [2024-11-19 17:45:01.259903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.259938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 
00:26:59.283 [2024-11-19 17:45:01.260087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.283 [2024-11-19 17:45:01.260122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.283 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.260265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.260312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.260427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.260468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.260661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.260696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.260945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.260991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 
00:26:59.284 [2024-11-19 17:45:01.261176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.261212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.261350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.261372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.261486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.261508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.261683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.261716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.261929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.261976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 
00:26:59.284 [2024-11-19 17:45:01.262159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.262191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.262304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.262326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.262500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.262534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.262669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.262702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.262822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.262855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 
00:26:59.284 [2024-11-19 17:45:01.263050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.263086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.263287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.263321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.263516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.263557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.263716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.263740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.263844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.263867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 
00:26:59.284 [2024-11-19 17:45:01.264091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.264114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.264198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.264220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.264332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.264354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.264452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.264474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.264573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.264595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 
00:26:59.284 [2024-11-19 17:45:01.264764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.264798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.264922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.264982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.265134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.265167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.265340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.265362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.265494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.265534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 
00:26:59.284 [2024-11-19 17:45:01.265720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.265753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.265933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.265978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.266190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.266224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.266402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.266436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 00:26:59.284 [2024-11-19 17:45:01.266616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.266641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.284 qpair failed and we were unable to recover it. 
00:26:59.284 [2024-11-19 17:45:01.266722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.284 [2024-11-19 17:45:01.266747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.266871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.266894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.267070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.267105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.267308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.267341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.267537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.267570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 
00:26:59.285 [2024-11-19 17:45:01.267692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.267715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.267895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.267917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.268013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.268036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.268298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.268333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 00:26:59.285 [2024-11-19 17:45:01.268531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.285 [2024-11-19 17:45:01.268564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.285 qpair failed and we were unable to recover it. 
00:26:59.285 [2024-11-19 17:45:01.268748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.285 [2024-11-19 17:45:01.268781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.285 qpair failed and we were unable to recover it.
00:26:59.287 [... identical error triplet (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeated continuously from 17:45:01.268 through 17:45:01.297, almost always on tqpair=0x18a9ba0, with isolated occurrences on tqpair=0x7f4104000b90 and tqpair=0x7f4100000b90 ...]
00:26:59.288 [2024-11-19 17:45:01.297479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.297520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.297706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.297740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.297859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.297893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.298174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.298211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.298344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.298367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 
00:26:59.288 [2024-11-19 17:45:01.298470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.298492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.298618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.298641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.298740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.298763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.298853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.298876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.299035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.299060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 
00:26:59.288 [2024-11-19 17:45:01.299218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.299241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.299419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.299453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.299569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.299602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.299862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.299896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.300109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.300146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 
00:26:59.288 [2024-11-19 17:45:01.300326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.300350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.300436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.300459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.300622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.300646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.300807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.300847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.301033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.301071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 
00:26:59.288 [2024-11-19 17:45:01.301283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.301317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.301489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.301513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.301676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.301710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.301897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.301930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.302154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.302190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 
00:26:59.288 [2024-11-19 17:45:01.302373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.302409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.302534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.302557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.302813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.302836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.303004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.288 [2024-11-19 17:45:01.303028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-11-19 17:45:01.303122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.303145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 
00:26:59.289 [2024-11-19 17:45:01.303302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.303324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.303417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.303441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.303552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.303576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.303758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.303792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.303921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.303964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 
00:26:59.289 [2024-11-19 17:45:01.304159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.304192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.304382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.304415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.304544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.304577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.304700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.304733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.304920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.304960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 
00:26:59.289 [2024-11-19 17:45:01.305159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.305193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.305324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.305348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.305443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.305465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.305642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.305664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.305771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.305805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 
00:26:59.289 [2024-11-19 17:45:01.306085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.306119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.306355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.306389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.306517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.306551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.306757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.306790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.306989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.307026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 
00:26:59.289 [2024-11-19 17:45:01.307211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.307245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.307502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.307535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.307709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.307732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.307837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.307860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.308104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.308128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 
00:26:59.289 [2024-11-19 17:45:01.308237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.308259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.308344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.308366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.308534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.308556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.308654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.308677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.308779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.308805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 
00:26:59.289 [2024-11-19 17:45:01.308907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.308929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.309163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.309186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.309274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.309296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.289 qpair failed and we were unable to recover it. 00:26:59.289 [2024-11-19 17:45:01.309384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.289 [2024-11-19 17:45:01.309406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.309487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.309510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 
00:26:59.290 [2024-11-19 17:45:01.309602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.309623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.309711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.309733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.309818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.309840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.310105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.310149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.310289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.310323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 
00:26:59.290 [2024-11-19 17:45:01.310513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.310545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.310726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.310759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.310980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.311017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.311223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.311258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.311446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.311479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 
00:26:59.290 [2024-11-19 17:45:01.311611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.311634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.311715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.311737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.311887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.311909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.312077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.312100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 00:26:59.290 [2024-11-19 17:45:01.312275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.290 [2024-11-19 17:45:01.312297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.290 qpair failed and we were unable to recover it. 
00:26:59.293 [2024-11-19 17:45:01.335648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.335681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.335963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.335998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.336265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.336298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.336574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.336607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.336896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.336928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 
00:26:59.293 [2024-11-19 17:45:01.337161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.337195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.337414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.337447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.337691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.337724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.338045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.338080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.338379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.338412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 
00:26:59.293 [2024-11-19 17:45:01.338699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.338721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.338907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.339025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.339059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.339304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.339338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.339516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.339549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 
00:26:59.293 [2024-11-19 17:45:01.339844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.339877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.340078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.340112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.340372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.340405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.340580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.340605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.340877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.340899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 
00:26:59.293 [2024-11-19 17:45:01.341123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.341146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.341318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.341352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.341610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.341632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.341799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.341820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 00:26:59.293 [2024-11-19 17:45:01.342055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.342079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.293 qpair failed and we were unable to recover it. 
00:26:59.293 [2024-11-19 17:45:01.342279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.293 [2024-11-19 17:45:01.342300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.342572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.342594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.342779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.342801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.342974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.342997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.343232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.343254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 
00:26:59.294 [2024-11-19 17:45:01.343443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.343465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.343592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.343613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.343776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.343798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.344021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.344045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.344236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.344258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 
00:26:59.294 [2024-11-19 17:45:01.344508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.344530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.344690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.344712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.344889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.344910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.345161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.345183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.345350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.345372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 
00:26:59.294 [2024-11-19 17:45:01.345546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.345568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.345842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.345875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.346145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.346181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.346414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.346436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.346562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.346584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 
00:26:59.294 [2024-11-19 17:45:01.346739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.346761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.346880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.346902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.347096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.347119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.347243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.347264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.347509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.347530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 
00:26:59.294 [2024-11-19 17:45:01.347734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.347756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.347976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.347998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.348174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.348196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.348375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.348408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.348704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.348737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 
00:26:59.294 [2024-11-19 17:45:01.348929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.348973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.349104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.349138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.349253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.349286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.349581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.349613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 00:26:59.294 [2024-11-19 17:45:01.349798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.294 [2024-11-19 17:45:01.349820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.294 qpair failed and we were unable to recover it. 
00:26:59.294 [2024-11-19 17:45:01.349915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.349937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.350219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.350254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.350458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.350490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.350739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.350772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.351084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.351120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 
00:26:59.295 [2024-11-19 17:45:01.351367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.351400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.351701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.351723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.351903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.351924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.352096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.352129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.352353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.352386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 
00:26:59.295 [2024-11-19 17:45:01.352596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.352629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.352897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.352919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.353117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.353140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.353396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.353418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.353641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.353674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 
00:26:59.295 [2024-11-19 17:45:01.353822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.353855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.354131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.354168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.354320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.354353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.354547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.354580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.354834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.354857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 
00:26:59.295 [2024-11-19 17:45:01.354980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.355003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.355175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.355197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.355391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.355424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.355656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.355689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.355883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.355916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 
00:26:59.295 [2024-11-19 17:45:01.356113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.356147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.356353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.356391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.356651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.356685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.356932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.356974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.357160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.357192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 
00:26:59.295 [2024-11-19 17:45:01.357470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.357501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.357757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.357779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.357933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.357973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.358091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.358113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.358209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.358231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 
00:26:59.295 [2024-11-19 17:45:01.358397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.358419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.358708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.358741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.358856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.358888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.295 qpair failed and we were unable to recover it. 00:26:59.295 [2024-11-19 17:45:01.359048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.295 [2024-11-19 17:45:01.359086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.359336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.359369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.359554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.359587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.359724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.359757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.360004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.360028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.360265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.360287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.360482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.360504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.360713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.360734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.360995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.361018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.361193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.361215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.361409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.361442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.361563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.361596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.361776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.361809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.362009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.362033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.362229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.362262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.362470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.362510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.362709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.362742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.362963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.362987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.363158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.363181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.363412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.363446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.363682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.363704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.363966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.364002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.364190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.364223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.364477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.364511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.364732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.364765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.364944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.364977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.365155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.365188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.365437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.365469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.365608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.365648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.365911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.365935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.366156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.366181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.366358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.366382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.366659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.366681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.366925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.366957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.367184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.367207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.367376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.367398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.367598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.367631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 
00:26:59.296 [2024-11-19 17:45:01.367848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.367882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.368026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.296 [2024-11-19 17:45:01.368061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.296 qpair failed and we were unable to recover it. 00:26:59.296 [2024-11-19 17:45:01.368219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.368253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.368456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.368490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.368744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.368778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.369046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.369088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.369293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.369327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.369480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.369514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.369800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.369835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.370034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.370068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.370287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.370321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.370526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.370549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.370777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.370810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.370938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.370983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.371101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.371134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.371366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.371400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.371682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.371705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.371891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.371925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.372151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.372186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.372465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.372499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.372783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.372818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.373077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.373101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.373268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.373291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.373568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.373590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.373850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.373873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.373988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.374013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.374201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.374223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.374332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.374355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.374496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.374529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.374719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.374754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.374965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.375000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.375208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.375243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.375363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.375396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.375696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.375729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.375927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.375971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.376179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.376213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.376419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.376452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.376691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.376724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.376979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.377015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 00:26:59.297 [2024-11-19 17:45:01.377219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.297 [2024-11-19 17:45:01.377253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.297 qpair failed and we were unable to recover it. 
00:26:59.297 [2024-11-19 17:45:01.377441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.377477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.377765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.377806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.378075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.378118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.378372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.378406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.378534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.378568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 
00:26:59.298 [2024-11-19 17:45:01.378795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.378828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.379052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.379088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.379371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.379405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.379612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.379635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.379753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.379786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 
00:26:59.298 [2024-11-19 17:45:01.380043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.380078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.380215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.380248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.380522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.380565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.380722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.380746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.380910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.380932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 
00:26:59.298 [2024-11-19 17:45:01.381059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.381082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.381181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.381204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.381399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.381422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.381667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.381701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.381978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.382014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 
00:26:59.298 [2024-11-19 17:45:01.382173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.382208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.382365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.382398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.382710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.382742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.382928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.382962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.383058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.383082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 
00:26:59.298 [2024-11-19 17:45:01.383313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.383335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.383517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.383541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.383770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.383793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.383894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.383927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.384165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.384200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 
00:26:59.298 [2024-11-19 17:45:01.384399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.384433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.384669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.384704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.384966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.384990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.385167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.385198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 00:26:59.298 [2024-11-19 17:45:01.385431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.298 [2024-11-19 17:45:01.385465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.298 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.385790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.385825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.385940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.385987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.386244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.386278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.386586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.386620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.386869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.386904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.387136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.387171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.387403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.387437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.387653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.387687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.387844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.387879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.388107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.388132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.388244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.388268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.388469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.388493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.388672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.388696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.388896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.388930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.389171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.389207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.389409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.389445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.389756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.389800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.389974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.389998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.390101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.390124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.390372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.390394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.390499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.390522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.390697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.390720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.390962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.390999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.391141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.391174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.391387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.391421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.391643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.391682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.391895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.391929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.392223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.392258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.392471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.392505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.392699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.392722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.392816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.392839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.392971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.392995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.393158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.393180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.393283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.393305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.393485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.393508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 
00:26:59.299 [2024-11-19 17:45:01.393631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.393654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.393767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.393790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.393971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.299 [2024-11-19 17:45:01.393996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.299 qpair failed and we were unable to recover it. 00:26:59.299 [2024-11-19 17:45:01.394107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.394131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.394294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.394318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 
00:26:59.300 [2024-11-19 17:45:01.394508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.394542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.394818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.394851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.394984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.395028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.395141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.395164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.395266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.395289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 
00:26:59.300 [2024-11-19 17:45:01.395461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.395484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.395573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.395597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.395716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.395739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.395849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.395890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.396102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.396138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 
00:26:59.300 [2024-11-19 17:45:01.396279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.396312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.396513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.396548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.396822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.396855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.397092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.397117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.397207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.397231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 
00:26:59.300 [2024-11-19 17:45:01.397404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.397426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.397591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.397625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.397816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.397851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.398110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.398146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.398337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.398372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 
00:26:59.300 [2024-11-19 17:45:01.398494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.398528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.398743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.398777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.398907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.398931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.399174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.399198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 00:26:59.300 [2024-11-19 17:45:01.399287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.300 [2024-11-19 17:45:01.399310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.300 qpair failed and we were unable to recover it. 
00:26:59.300 [2024-11-19 17:45:01.399471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.399494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.399631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.399666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.399871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.399905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.400177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.400213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.400421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.400456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.400602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.400625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.400884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.400918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.401074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.401109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.401366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.401401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.401556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.401591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.401728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.401762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.300 [2024-11-19 17:45:01.402047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.300 [2024-11-19 17:45:01.402084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.300 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.402217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.402253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.402383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.402417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.402540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.402574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.402710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.402745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.403006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.403031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.403153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.403175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.403368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.403403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.403525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.403560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.403680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.403714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.403893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.403927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.404072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.404108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.404296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.404330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.404471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.404505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.404701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.404725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.404925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.404969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.405175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.405210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.405327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.405366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.405560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.405593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.405711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.405744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.406000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.406036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.406231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.406264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.406505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.406539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.406742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.406775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.407000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.407036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.407238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.407273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.407529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.407562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.407767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.407800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.408099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.408134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.408325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.408357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.408555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.408588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.408789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.408823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.409081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.409105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.409214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.409236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.409355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.409377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.409557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.409579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.409765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.409798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.409997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.410032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.410154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.301 [2024-11-19 17:45:01.410189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.301 qpair failed and we were unable to recover it.
00:26:59.301 [2024-11-19 17:45:01.410462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.410496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.410631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.410654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.410766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.410788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.410891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.410913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.411042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.411065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.411181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.411207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.411492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.411515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.411634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.411657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.411850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.411874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.412056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.412079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.412260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.412282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.412371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.412393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.412649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.412671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.412849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.412871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.412965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.412989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.413128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.413150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.413331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.413353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.413441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.413465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.413557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.413579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.413694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.413717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.413908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.413929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.414054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.414078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.414182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.414204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.414368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.414390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.414569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.414591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.414849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.414882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.415020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.415056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.415265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.415299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.415551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.415584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.415723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.415756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.415878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.415900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.416009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.416035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.416205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.416232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.416389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.416411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.416664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.416699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.416822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.416856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.417046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.417082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.417283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.417318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.302 [2024-11-19 17:45:01.417508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.302 [2024-11-19 17:45:01.417541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.302 qpair failed and we were unable to recover it.
00:26:59.303 [2024-11-19 17:45:01.417739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-11-19 17:45:01.417773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.303 qpair failed and we were unable to recover it.
00:26:59.303 [2024-11-19 17:45:01.418074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-11-19 17:45:01.418109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.303 qpair failed and we were unable to recover it.
00:26:59.303 [2024-11-19 17:45:01.418317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-11-19 17:45:01.418350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.303 qpair failed and we were unable to recover it.
00:26:59.303 [2024-11-19 17:45:01.418560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-11-19 17:45:01.418594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.303 qpair failed and we were unable to recover it.
00:26:59.303 [2024-11-19 17:45:01.418792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-11-19 17:45:01.418825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.303 qpair failed and we were unable to recover it.
00:26:59.303 [2024-11-19 17:45:01.418985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.303 [2024-11-19 17:45:01.419019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.303 qpair failed and we were unable to recover it.
00:26:59.303 [2024-11-19 17:45:01.419145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.419178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.419365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.419444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.419615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.419654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.419860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.419895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.420057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.420095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 
00:26:59.303 [2024-11-19 17:45:01.420217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.420243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.420498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.420521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.420628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.420651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.420752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.420776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.420936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.420966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 
00:26:59.303 [2024-11-19 17:45:01.421145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.421168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.421335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.421358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.421562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.421596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.421728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.421762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.421964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.421989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 
00:26:59.303 [2024-11-19 17:45:01.422168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.422190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.422302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.422325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.422438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.422461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.422624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.422646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.422753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.422776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 
00:26:59.303 [2024-11-19 17:45:01.422981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.423005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.423174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.423197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.423378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.423401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.423505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.423527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.423683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.423706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 
00:26:59.303 [2024-11-19 17:45:01.423881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.423915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.424068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.424104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.424218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.303 [2024-11-19 17:45:01.424251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-11-19 17:45:01.424500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.424579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.424861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.424899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.425210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.425248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.425454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.425489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.425671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.425705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.425842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.425875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.426063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.426100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.426304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.426337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.426471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.426504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.426757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.426791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.426980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.427015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.427228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.427260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.427454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.427487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.427619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.427661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.427792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.427824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.428002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.428037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.428278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.428313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.428449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.428482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.428619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.428653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.428835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.428868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.429050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.429089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.429294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.429328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.429528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.429561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.429812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.429846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.430103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.430146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.430426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.430461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.430664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.430699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.430847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.430869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.430966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.430990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.431095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.431118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.431215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.431237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.431330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.431353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.431458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.431480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.431670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.431693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.431869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.431891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.432109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.432132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.432363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.432396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-11-19 17:45:01.432691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.304 [2024-11-19 17:45:01.432725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-11-19 17:45:01.432858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.432891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.433112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.433147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.433288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.433328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.433521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.433555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-11-19 17:45:01.433753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.433776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.434028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.434052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.434165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.434188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.434410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.434443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.434740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.434773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-11-19 17:45:01.434988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.435011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.435181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.435213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.435419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.435453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.435656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.435689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.435866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.435888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-11-19 17:45:01.436045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.436068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.436251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.436284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.436483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.436517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.436773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.436807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.437048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.437083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-11-19 17:45:01.437240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.437275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.437461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.437498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.437781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.437805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.437986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.438010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.438180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.438203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-11-19 17:45:01.438400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.438423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.438624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.438646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.438843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.438876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.439055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.439090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.439299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.439334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-11-19 17:45:01.439583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.439616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.439897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.439932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.440239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.440275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.440462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.440497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.440738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.440773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-11-19 17:45:01.440934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.440979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.441098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.441131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.441295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.441329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.441471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.441504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-11-19 17:45:01.441734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.305 [2024-11-19 17:45:01.441768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.442067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.442091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.442211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.442236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.442342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.442363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.442464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.442486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.442751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.442774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.442969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.442993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.443120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.443142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.443325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.443347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.443470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.443493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.443752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.443788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.444107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.444144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.444275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.444309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.444585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.444621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.444823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.444846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.445041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.445066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.445176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.445199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.445319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.445341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.445517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.445541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.445708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.445732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.445899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.445922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.446032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.446056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.446175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.446197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.446301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.446323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.446431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.446454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.446671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.446693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.446878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.446900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.447009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.447032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.447151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.447174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.447280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.447304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.447412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.447434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.447653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.447687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.449038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.449086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.449351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.449375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.449550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.449572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.449830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.449865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 
00:26:59.306 [2024-11-19 17:45:01.450031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.450067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.450270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.450303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.306 [2024-11-19 17:45:01.450444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.306 [2024-11-19 17:45:01.450479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.306 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.450733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.450766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.451018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.451044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 
00:26:59.307 [2024-11-19 17:45:01.451169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.451191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.451389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.451413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.451540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.451563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.451722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.451746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.451861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.451883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 
00:26:59.307 [2024-11-19 17:45:01.452186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.452210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.452408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.452430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.452554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.452578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.452754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.452779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.453036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.453059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 
00:26:59.307 [2024-11-19 17:45:01.453182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.453204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.453333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.453357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.453475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.453498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.453693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.453715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.453981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.454020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 
00:26:59.307 [2024-11-19 17:45:01.454290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.454315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.454428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.454452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.454557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.454582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.454809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.454837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.454933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.454968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 
00:26:59.307 [2024-11-19 17:45:01.455147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.455171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.455273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.455296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.455416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.455442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.455616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.455640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.455831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.455856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 
00:26:59.307 [2024-11-19 17:45:01.455971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.455996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.456201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.456225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.456333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.456357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.456575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.456608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.456921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.456966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 
00:26:59.307 [2024-11-19 17:45:01.457102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.457124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.457286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.307 [2024-11-19 17:45:01.457309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.307 qpair failed and we were unable to recover it. 00:26:59.307 [2024-11-19 17:45:01.457513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.457548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 00:26:59.308 [2024-11-19 17:45:01.457684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.457718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 00:26:59.308 [2024-11-19 17:45:01.457988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.458025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 
00:26:59.308 [2024-11-19 17:45:01.458232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.458257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 00:26:59.308 [2024-11-19 17:45:01.458369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.458392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 00:26:59.308 [2024-11-19 17:45:01.458563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.458598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 00:26:59.308 [2024-11-19 17:45:01.458808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.458842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 00:26:59.308 [2024-11-19 17:45:01.459053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.308 [2024-11-19 17:45:01.459093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.308 qpair failed and we were unable to recover it. 
00:26:59.594 [2024-11-19 17:45:01.478973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.479008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.479215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.479250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.479460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.479493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.479693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.479727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.479876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.479910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 
00:26:59.594 [2024-11-19 17:45:01.480059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.480096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.480279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.480303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.480486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.480510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.480610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.480633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.480757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.480779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 
00:26:59.594 [2024-11-19 17:45:01.480944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.480975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.481167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.481190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.481347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.481368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.481484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.481507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.481759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.481782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 
00:26:59.594 [2024-11-19 17:45:01.481993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.482030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.482222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.482255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.482393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.482426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.482619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.594 [2024-11-19 17:45:01.482652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.594 qpair failed and we were unable to recover it. 00:26:59.594 [2024-11-19 17:45:01.482839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.482872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.483062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.483086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.483172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.483193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.483378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.483400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.483558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.483581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.483708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.483742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.483920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.483963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.484081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.484114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.484314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.484351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.484484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.484517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.484794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.484838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.485021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.485044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.485176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.485199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.485392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.485426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.485551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.485584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.485770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.485804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.486082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.486117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.486301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.486334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.486534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.486567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.486819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.486852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.487056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.487079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.487252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.487273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.487393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.487426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.487618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.487652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.487845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.487877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.488051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.488074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.488162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.488185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.488433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.488477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.488685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.488719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.488914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.488959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.489180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.489203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.489357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.489378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.489560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.489581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.489784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.489817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.490037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.490072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.490266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.490288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 
00:26:59.595 [2024-11-19 17:45:01.490460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.490493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.490617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.490650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.595 [2024-11-19 17:45:01.490848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.595 [2024-11-19 17:45:01.490881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.595 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.491020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.491047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.491221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.491244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 
00:26:59.596 [2024-11-19 17:45:01.491507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.491530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.491630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.491652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.491749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.491771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.491945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.491977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.492089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.492111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 
00:26:59.596 [2024-11-19 17:45:01.492222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.492245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.492419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.492442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.492557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.492579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.492667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.492688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.492781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.492804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 
00:26:59.596 [2024-11-19 17:45:01.492899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.492921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.493104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.493138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.493326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.493359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.493562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.493594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.493791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.493825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 
00:26:59.596 [2024-11-19 17:45:01.494024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.494061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.494189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.494221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.494352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.494385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.494517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.494551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 00:26:59.596 [2024-11-19 17:45:01.494661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.596 [2024-11-19 17:45:01.494695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.596 qpair failed and we were unable to recover it. 
00:26:59.599 [2024-11-19 17:45:01.519326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.519349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.519603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.519625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.519859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.519894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.520150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.520186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.520399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.520423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 
00:26:59.599 [2024-11-19 17:45:01.520550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.520573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.520730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.520753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.520924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.520956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.521078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.521100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.521226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.521248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 
00:26:59.599 [2024-11-19 17:45:01.521378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.521400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.521504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.521526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.521733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.521757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.599 [2024-11-19 17:45:01.521934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.599 [2024-11-19 17:45:01.521978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.599 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.522095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.522118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.522209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.522232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.522343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.522366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.522485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.522512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.522752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.522775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.522894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.522916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.523059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.523084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.523213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.523235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.523410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.523433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.523706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.523738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.523941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.523990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.524140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.524174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.524301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.524325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.524493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.524515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.524790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.524813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.524913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.524936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.525034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.525057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.525319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.525342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.525438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.525461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.525633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.525655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.525922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.525968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.526154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.526187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.526399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.526434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.526570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.526605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.526891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.526925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.527262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.527285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.527401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.527425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.527708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.527741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.527963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.528000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.528211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.528247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.528393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.528427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.528667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.528701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.528896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.528919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.529062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.529086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.529265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.529288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.529411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.529433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 
00:26:59.600 [2024-11-19 17:45:01.529672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.529694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.529862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.600 [2024-11-19 17:45:01.529896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.600 qpair failed and we were unable to recover it. 00:26:59.600 [2024-11-19 17:45:01.530059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.530095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.530264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.530299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.530508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.530542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 
00:26:59.601 [2024-11-19 17:45:01.530842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.530876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.531095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.531132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.531333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.531366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.531674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.531708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.531912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.531959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 
00:26:59.601 [2024-11-19 17:45:01.532104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.532137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.532350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.532384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.532528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.532563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.532762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.532796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.532915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.532962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 
00:26:59.601 [2024-11-19 17:45:01.533196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.533219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.533417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.533451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.533739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.533773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.533976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.534001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.534178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.534202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 
00:26:59.601 [2024-11-19 17:45:01.534315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.534339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.534568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.534590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.534782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.534816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.534970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.535006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.535133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.535167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 
00:26:59.601 [2024-11-19 17:45:01.535296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.535336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.535437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.535460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.535696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.535729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.535888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.535912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 00:26:59.601 [2024-11-19 17:45:01.536038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.601 [2024-11-19 17:45:01.536063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.601 qpair failed and we were unable to recover it. 
00:26:59.604 [2024-11-19 17:45:01.556419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.556453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.556628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.556662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.556797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.556840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.556927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.556957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.557060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.557084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 
00:26:59.604 [2024-11-19 17:45:01.557175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.557197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.557353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.557374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.557466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.557488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.557663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.557684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.557788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.557811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 
00:26:59.604 [2024-11-19 17:45:01.557895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.604 [2024-11-19 17:45:01.557919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.604 qpair failed and we were unable to recover it. 00:26:59.604 [2024-11-19 17:45:01.558145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.558171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.558332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.558354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.558465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.558498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.558616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.558648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.605 [2024-11-19 17:45:01.558787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.558819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.559024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.559061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.559256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.559283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.559446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.559468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.559575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.559598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.605 [2024-11-19 17:45:01.559759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.559782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.559891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.559913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.560007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.560030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.560192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.560214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.560370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.560393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.605 [2024-11-19 17:45:01.560499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.560522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.560634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.560656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.560747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.560769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.560937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.560967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.561052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.561074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.605 [2024-11-19 17:45:01.561170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.561191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.561362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.561407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.561539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.561572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.561691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.561724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.561835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.561870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.605 [2024-11-19 17:45:01.562054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.562089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.562213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.562247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.562427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.562450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.562549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.562588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.562782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.562816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.605 [2024-11-19 17:45:01.562995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.563030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.563163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.563196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.563312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.563334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.563442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.563463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.563651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.563676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.605 [2024-11-19 17:45:01.563779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.563801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.563910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.563932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.564036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.564058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.564308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.564342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 00:26:59.605 [2024-11-19 17:45:01.564522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.605 [2024-11-19 17:45:01.564555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.605 qpair failed and we were unable to recover it. 
00:26:59.606 [2024-11-19 17:45:01.564678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.564711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.564893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.564925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.565072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.565095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.565196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.565217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.565320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.565342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 
00:26:59.606 [2024-11-19 17:45:01.565445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.565468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.565763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.565786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.565999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.566022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.566194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.566217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.566394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.566416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 
00:26:59.606 [2024-11-19 17:45:01.566666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.566699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.567002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.567039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.567191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.567236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.567462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.567485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.567603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.567624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 
00:26:59.606 [2024-11-19 17:45:01.567866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.567888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.568101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.568125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.568258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.568290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.568541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.568574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.568846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.568879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 
00:26:59.606 [2024-11-19 17:45:01.569165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.569189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.569365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.569387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.569631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.569653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.569793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.569815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 00:26:59.606 [2024-11-19 17:45:01.569941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.569972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it. 
00:26:59.606 [2024-11-19 17:45:01.570134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.606 [2024-11-19 17:45:01.570156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.606 qpair failed and we were unable to recover it.
00:26:59.606–00:26:59.609 [... the same three-message sequence — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats roughly 114 more times between 2024-11-19 17:45:01.570 and 17:45:01.596 ...]
00:26:59.609 [2024-11-19 17:45:01.596962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.596998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.609 [2024-11-19 17:45:01.597203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.597238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.609 [2024-11-19 17:45:01.597418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.597441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.609 [2024-11-19 17:45:01.597667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.597703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.609 [2024-11-19 17:45:01.597916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.597962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 
00:26:59.609 [2024-11-19 17:45:01.598106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.598140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.609 [2024-11-19 17:45:01.598350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.598386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.609 [2024-11-19 17:45:01.598548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.598593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.609 [2024-11-19 17:45:01.598777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.609 [2024-11-19 17:45:01.598801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.609 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.599000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.599026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.599157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.599180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.599328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.599362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.599639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.599674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.599967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.600002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.600202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.600237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.600376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.600400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.600662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.600698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.600909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.600976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.601180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.601214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.601377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.601411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.601545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.601579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.601836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.601869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.602009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.602034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.602209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.602232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.602442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.602476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.602734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.602769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.602970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.603006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.603198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.603233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.603419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.603442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.603702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.603736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.604002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.604038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.604186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.604219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.604475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.604510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.604769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.604804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.605030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.605065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.605268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.605302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.605444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.605467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.605687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.605721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.605931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.605980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.606125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.606148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.606276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.606299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.606478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.606500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.606620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.606644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.606764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.606787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.606892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.606919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 
00:26:59.610 [2024-11-19 17:45:01.607114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.607139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.607266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.610 [2024-11-19 17:45:01.607288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.610 qpair failed and we were unable to recover it. 00:26:59.610 [2024-11-19 17:45:01.607399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.607423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.607515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.607538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.607722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.607756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 
00:26:59.611 [2024-11-19 17:45:01.607888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.607923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.608069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.608104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.608290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.608313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.608400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.608423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.608584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.608608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 
00:26:59.611 [2024-11-19 17:45:01.608705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.608728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.608853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.608876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.609043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.609068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.609203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.609226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.609335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.609359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 
00:26:59.611 [2024-11-19 17:45:01.609524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.609547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.609652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.609675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.609790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.609813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.609919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.609942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.610045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.610069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 
00:26:59.611 [2024-11-19 17:45:01.610181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.610204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.610315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.610338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.610497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.610519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.610620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.610643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.610743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.610766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 
00:26:59.611 [2024-11-19 17:45:01.610865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.610890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.610987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.611016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.611179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.611202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.611370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.611393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.611508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.611542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 
00:26:59.611 [2024-11-19 17:45:01.611690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.611725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.611982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.612019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.612155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.612178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.612276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.612299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 00:26:59.611 [2024-11-19 17:45:01.612461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.611 [2024-11-19 17:45:01.612484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.611 qpair failed and we were unable to recover it. 
00:26:59.611 [2024-11-19 17:45:01.612640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.612 [2024-11-19 17:45:01.612664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.612 qpair failed and we were unable to recover it. 00:26:59.612 [2024-11-19 17:45:01.612777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.612 [2024-11-19 17:45:01.612800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.612 qpair failed and we were unable to recover it. 00:26:59.612 [2024-11-19 17:45:01.612900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.612 [2024-11-19 17:45:01.612924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.612 qpair failed and we were unable to recover it. 00:26:59.612 [2024-11-19 17:45:01.613078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.612 [2024-11-19 17:45:01.613102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.612 qpair failed and we were unable to recover it. 00:26:59.612 [2024-11-19 17:45:01.613210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.612 [2024-11-19 17:45:01.613232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.612 qpair failed and we were unable to recover it. 
00:26:59.612 [2024-11-19 17:45:01.613351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.613375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.613533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.613556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.613662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.613686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.613776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.613798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.613902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.613925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.614057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.614082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.614211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.614234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.614343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.614365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.614456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.614479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.614572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.614595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.614826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.614850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.615088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.615113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.615279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.615303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.615403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.615426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.615526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.615550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.615778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.615802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.615898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.615922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.616060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.616253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.616367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.616498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.616634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.616761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.616887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.616987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.617012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.617184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.617209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.617399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.617435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.617499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b7af0 (9): Bad file descriptor
00:26:59.612 [2024-11-19 17:45:01.617988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.618067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.618301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.618340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.618550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.618586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.618728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.618763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.618981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.619019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.612 [2024-11-19 17:45:01.619226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.612 [2024-11-19 17:45:01.619262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.612 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.619405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.619439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.619569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.619604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.619817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.619852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.620051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.620087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.620220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.620256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.620445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.620480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.620739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.620773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.620981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.621018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.621140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.621175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.621291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.621323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.621489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.621523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.621646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.621682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.621812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.621846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.622076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.622111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.622229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.622262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.622467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.622506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.622769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.622802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.622935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.622993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.623085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.623108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.623194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.623216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.623328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.623354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.623559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.623592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.623781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.623814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.623944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.623993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.624123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.624146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.624364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.624396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.624584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.624618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.624809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.624843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.624992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.625029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.625167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.625191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.625291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.625313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.625426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.625447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.625616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.625638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.625748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.625771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.625890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.625912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.626048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.626071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.626171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.626193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.626299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.613 [2024-11-19 17:45:01.626322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.613 qpair failed and we were unable to recover it.
00:26:59.613 [2024-11-19 17:45:01.626421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.626443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.626531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.626553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.626707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.626729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.626829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.626851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.627028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.627052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.627154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.627177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.627352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.627385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.627508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.627541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.627664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.627698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.627934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.627978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.628191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.628224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.628425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.628458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.628608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.628630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.628719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.628740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.628843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.628866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.629041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.629065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.629307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.629340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.629454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.629489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.629690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.629723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.629843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.629876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.630005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.630039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.630181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.630214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.630345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.630378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.630621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.614 [2024-11-19 17:45:01.630700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:59.614 qpair failed and we were unable to recover it.
00:26:59.614 [2024-11-19 17:45:01.630855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.630894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.631131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.631168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.631301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.631336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.631457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.631491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.631607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.631640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 
00:26:59.614 [2024-11-19 17:45:01.631841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.631875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.632029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.632063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.632181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.632215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.632408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.632440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.632562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.632596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 
00:26:59.614 [2024-11-19 17:45:01.632740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.632773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.632967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.633003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.633141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.633187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.633446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.633479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.614 qpair failed and we were unable to recover it. 00:26:59.614 [2024-11-19 17:45:01.633598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.614 [2024-11-19 17:45:01.633632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.633813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.633847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.634050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.634085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.634275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.634309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.634443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.634477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.634603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.634636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.634772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.634806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.635005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.635040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.635239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.635273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.635463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.635497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.635701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.635737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.635850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.635883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.636032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.636066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.636192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.636224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.636357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.636391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.636514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.636547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.636685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.636718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.636906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.636939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.637210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.637244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.637362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.637395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.637524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.637558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.637774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.637808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.638004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.638039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.638162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.638194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.638323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.638356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.638617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.638698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.638837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.638876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.639023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.639058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.639149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.639172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.639263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.639285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.639391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.639414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.639587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.639609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.639729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.639763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.639874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.639906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.640053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.640088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.640198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.640231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 
00:26:59.615 [2024-11-19 17:45:01.640347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.640369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.640459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.640481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.615 qpair failed and we were unable to recover it. 00:26:59.615 [2024-11-19 17:45:01.640652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.615 [2024-11-19 17:45:01.640675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.640807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.640829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.640999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.641036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 
00:26:59.616 [2024-11-19 17:45:01.641232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.641266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.641377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.641409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.641636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.641669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.641852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.641886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.642091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.642124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 
00:26:59.616 [2024-11-19 17:45:01.642317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.642352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.642469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.642491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.642647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.642687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.642815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.642848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.642971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.643006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 
00:26:59.616 [2024-11-19 17:45:01.643193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.643225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.643412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.643438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.643596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.643629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.643828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.643861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.644049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.644083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 
00:26:59.616 [2024-11-19 17:45:01.644264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.644287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.644458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.644491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.644605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.644637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.644761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.644794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.644979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.645014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 
00:26:59.616 [2024-11-19 17:45:01.645208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.645242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.645358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.645391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.645522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.645555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.645694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.645727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.645925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.645969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 
00:26:59.616 [2024-11-19 17:45:01.646096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.646120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.646279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.646301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.646418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.646441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.646615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.646637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.646910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.646933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 
00:26:59.616 [2024-11-19 17:45:01.647130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.647154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.616 [2024-11-19 17:45:01.647279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.616 [2024-11-19 17:45:01.647311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.616 qpair failed and we were unable to recover it. 00:26:59.617 [2024-11-19 17:45:01.647495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.647528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 00:26:59.617 [2024-11-19 17:45:01.647806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.647840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 00:26:59.617 [2024-11-19 17:45:01.647977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.648012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 
00:26:59.617 [2024-11-19 17:45:01.648194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.648227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 00:26:59.617 [2024-11-19 17:45:01.648481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.648515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 00:26:59.617 [2024-11-19 17:45:01.648741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.648774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 00:26:59.617 [2024-11-19 17:45:01.648907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.648966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 00:26:59.617 [2024-11-19 17:45:01.649216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.617 [2024-11-19 17:45:01.649250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.617 qpair failed and we were unable to recover it. 
00:26:59.617 [2024-11-19 17:45:01.649453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.649486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.649810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.649844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.650068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.650103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.650227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.650261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.650464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.650497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.650718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.650751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.651040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.651080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.651264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.651287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.651468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.651490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.651703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.651725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.651884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.651907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.652093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.652117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.652224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.652247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.652441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.652465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.652707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.652731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.652923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.652946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.653135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.653159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.653327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.653352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.653574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.653607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.653799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.653833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.653987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.654022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.654227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.654249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.654432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.654466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.654760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.654793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.655009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.655043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.655168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.655213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.655332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.655354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.655464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.655487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.655750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.655791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.617 [2024-11-19 17:45:01.655927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.617 [2024-11-19 17:45:01.655969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.617 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.656150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.656183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.656301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.656337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.656520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.656553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.656833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.656867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.657019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.657053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.657269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.657303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.657452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.657486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.657705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.657728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.657981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.658017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.658222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.658255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.658441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.658463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.658774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.658796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.658976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.658999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.659185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.659219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.659514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.659549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.659765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.659799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.660015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.660050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.660183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.660216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.660438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.660471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.660585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.660619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.660895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.660930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.661089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.661124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.661332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.661366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.661638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.661661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.661887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.661909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.662037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.662061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.662202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.662225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.662448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.662472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.662729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.662752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.662915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.662938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.663097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.663120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.663235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.663256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.663458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.663481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.663634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.663657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.663887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.663921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.664128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.664164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.664361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.664387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.664581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.618 [2024-11-19 17:45:01.664604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.618 qpair failed and we were unable to recover it.
00:26:59.618 [2024-11-19 17:45:01.664769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.664792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.665077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.665101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.665244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.665267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.665438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.665482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.665755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.665788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.665998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.666033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.666230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.666264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.666451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.666485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.666756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.666779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.666960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.666995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.667205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.667239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.667444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.667478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.667697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.667730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.668042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.668078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.668264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.668287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.668407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.668430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.668613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.668635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.668728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.668751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.669048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.669085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.669211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.669246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.669448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.669497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.669660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.669683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.669862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.669896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.670044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.670079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.670226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.670268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.670380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.670406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.670510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.619 [2024-11-19 17:45:01.670534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.619 qpair failed and we were unable to recover it.
00:26:59.619 [2024-11-19 17:45:01.670709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.670732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.670835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.670868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.671075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.671109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.671314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.671347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.671546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.671569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 
00:26:59.619 [2024-11-19 17:45:01.671729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.671752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.672005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.672029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.672210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.672233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.672345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.672369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.672488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.672512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 
00:26:59.619 [2024-11-19 17:45:01.672703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.672737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.619 [2024-11-19 17:45:01.673005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-11-19 17:45:01.673041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.619 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.673200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.673233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.673379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.673412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.673661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.673696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 
00:26:59.620 [2024-11-19 17:45:01.673904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.673939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.674137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.674171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.674352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.674386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.674644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.674688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.674850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.674872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 
00:26:59.620 [2024-11-19 17:45:01.675062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.675089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.675276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.675309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.675587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.675621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.675879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.675914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.676083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.676119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 
00:26:59.620 [2024-11-19 17:45:01.676309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.676336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.676595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.676628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.676774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.676809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.676995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.677030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.677190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.677225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 
00:26:59.620 [2024-11-19 17:45:01.677346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.677381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.677539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.677572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.677725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.677759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.677893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.677927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.678093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.678129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 
00:26:59.620 [2024-11-19 17:45:01.678271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.678305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.678436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.678470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.678650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.678673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.678796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.678831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.679031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.679068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 
00:26:59.620 [2024-11-19 17:45:01.679197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.679231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.679381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.679415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.679527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.679562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.679749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.679783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.679985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.680022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 
00:26:59.620 [2024-11-19 17:45:01.680151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-11-19 17:45:01.680187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.620 qpair failed and we were unable to recover it. 00:26:59.620 [2024-11-19 17:45:01.680304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.680337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.680539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.680572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.680705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.680739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.680873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.680907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.621 [2024-11-19 17:45:01.681129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.681164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.681364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.681399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.681523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.681671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.681693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.681787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.681810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.621 [2024-11-19 17:45:01.681903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.681926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.682114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.682195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.682365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.682403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.682544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.682581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.682736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.682771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.621 [2024-11-19 17:45:01.682898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.682933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.683160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.683195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.683409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.683444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.683571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.683605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.683736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.683771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.621 [2024-11-19 17:45:01.683916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.683943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.684156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.684181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.684303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.684337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.684473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.684508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.684651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.684685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.621 [2024-11-19 17:45:01.684873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.684907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.685056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.685092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.685213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.685247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.685435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.685459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.685563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.685587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.621 [2024-11-19 17:45:01.685683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.685705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.685807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.685831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.685920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.685944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.686143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.686167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.686271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.686293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.621 [2024-11-19 17:45:01.686495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.686517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.686681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.686704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.686924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.686957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.687058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.687081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 00:26:59.621 [2024-11-19 17:45:01.687257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.621 [2024-11-19 17:45:01.687292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.621 qpair failed and we were unable to recover it. 
00:26:59.622 [2024-11-19 17:45:01.687481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.687515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.687714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.687749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.687866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.687900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.688177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.688213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.688357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.688392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.688582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.688616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.688824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.688859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.688998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.689035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.689241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.689266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.689375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.689398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.689504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.689527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.689708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.689741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.690011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.690048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.690240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.690273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.690389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.690430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.690654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.690678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.690860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.690884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.690982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.691006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.691096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.691119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.691208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.691232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.691340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.691363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.691466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.691489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.691591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.691614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.691770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.691815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.692007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.692043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.692228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.692263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.692450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.692483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.692604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.692637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.692752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.692787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.692913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.692946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.693186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.693220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.693407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.693441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.693566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.693589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.693766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.693799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.693922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.693968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.694166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.694206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.694330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.694361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.694550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.622 [2024-11-19 17:45:01.694572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.622 qpair failed and we were unable to recover it.
00:26:59.622 [2024-11-19 17:45:01.694678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.694701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.694796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.694819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.695003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.695039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.695165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.695197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.695319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.695352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.695538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.695569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.695863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.695896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.696042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.696077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.696207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.696240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.696439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.696472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.696665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.696687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.696778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.696816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.696962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.696998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.697183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.697217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.697333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.697366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.697558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.697590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.697726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.697759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.697903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.697935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.698074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.698107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.698305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.698337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.698471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.698503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.698609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.698642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.698820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.698855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.699057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.699093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.699216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.699242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.699336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.699360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.699543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.699565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.699716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.699738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.699828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.699868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.700073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.700108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.700237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.700269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.700469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.700502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.700659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.700693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.700804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.700837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.700945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.701000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.701189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.701222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.701354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.701387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.701502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.701524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.623 [2024-11-19 17:45:01.701758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.623 [2024-11-19 17:45:01.701781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.623 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.701963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.701987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.702104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.702126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.702306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.702328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.702484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.702505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.702662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.702684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.702778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.702800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.702976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.702999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.703112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.703134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.703233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.703255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.703345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.703368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.703475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.703497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.703687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.703709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.703813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.703838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.703945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.703987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.704164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.704186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.704290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.704312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.704420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.704442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.704605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.704638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.704820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.704855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.704993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.705028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.705218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.705251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.705392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.705427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.705535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.705568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.705755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.705789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.706065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.706099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.706285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.706318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.706514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.706547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.706797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.706829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.706960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.706995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.707194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.707226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.707349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.707382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.707508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.707542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.707741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.707773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.707905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.707938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.708167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.708200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.708401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.708434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.708548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.708580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.708703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.624 [2024-11-19 17:45:01.708736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.624 qpair failed and we were unable to recover it.
00:26:59.624 [2024-11-19 17:45:01.708863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.708897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.709094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.709128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.709315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.709348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.709563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.709598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.709719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.709740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 
00:26:59.625 [2024-11-19 17:45:01.709903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.709925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.710188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.710232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.710360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.710393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.710584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.710616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.710800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.710822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 
00:26:59.625 [2024-11-19 17:45:01.710936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.710968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.711193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.711214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.711328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.711351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.711591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.711624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.711758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.711789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 
00:26:59.625 [2024-11-19 17:45:01.711923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.711969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.712078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.712109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.712305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.712346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.712433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.712455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.712606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.712629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 
00:26:59.625 [2024-11-19 17:45:01.712748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.712770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.712858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.712879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.713105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.713129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.713246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.713267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.713367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.713388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 
00:26:59.625 [2024-11-19 17:45:01.713486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.713508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.713598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.713620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.713781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.713803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.713888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.713910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.714022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.714045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 
00:26:59.625 [2024-11-19 17:45:01.714147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.714169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.714259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.714282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.714520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.714555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.714761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.625 [2024-11-19 17:45:01.714795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.625 qpair failed and we were unable to recover it. 00:26:59.625 [2024-11-19 17:45:01.714959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.714994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.715192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.715226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.715346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.715378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.715557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.715602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.715787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.715809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.715984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.716019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.716145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.716178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.716293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.716326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.716511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.716549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.716748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.716782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.716984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.717020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.717151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.717183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.717400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.717433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.717649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.717682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.717956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.717980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.718139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.718160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.718343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.718377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.718648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.718681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.718976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.719011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.719205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.719238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.719432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.719466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.719721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.719743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.719992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.720015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.720131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.720165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.720359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.720392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.720616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.720649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.720823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.720845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.721035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.721059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.721230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.721252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.721475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.721496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.721594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.721616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.721814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.721847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.722124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.722159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.722433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.722466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.722779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.722812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.723010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.723051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 
00:26:59.626 [2024-11-19 17:45:01.723239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.723271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.723520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.723553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.626 qpair failed and we were unable to recover it. 00:26:59.626 [2024-11-19 17:45:01.723802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.626 [2024-11-19 17:45:01.723825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 00:26:59.627 [2024-11-19 17:45:01.723919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.627 [2024-11-19 17:45:01.723941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 00:26:59.627 [2024-11-19 17:45:01.724140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.627 [2024-11-19 17:45:01.724174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 
00:26:59.627 [2024-11-19 17:45:01.724431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.627 [2024-11-19 17:45:01.724464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 00:26:59.627 [2024-11-19 17:45:01.724748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.627 [2024-11-19 17:45:01.724779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 00:26:59.627 [2024-11-19 17:45:01.724983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.627 [2024-11-19 17:45:01.725018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 00:26:59.627 [2024-11-19 17:45:01.725212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.627 [2024-11-19 17:45:01.725246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 00:26:59.627 [2024-11-19 17:45:01.725455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.627 [2024-11-19 17:45:01.725478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.627 qpair failed and we were unable to recover it. 
00:26:59.630 [... same failure sequence repeated continuously from 17:45:01.725596 through 17:45:01.749334: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420; every qpair failed and could not be recovered ...]
00:26:59.630 [2024-11-19 17:45:01.749448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.749472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.749586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.749613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.749825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.749847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.750132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.750156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.750334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.750357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 
00:26:59.630 [2024-11-19 17:45:01.750478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.750500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.750766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.750789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.750979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.751002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.751127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.751168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.751306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.751329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 
00:26:59.630 [2024-11-19 17:45:01.751453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.751476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.751668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.751691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.751811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.751834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.752028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.752052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.752185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.752208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 
00:26:59.630 [2024-11-19 17:45:01.752396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.752418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.752589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.752613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.752721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.752744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.752926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.752974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.753095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.753117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 
00:26:59.630 [2024-11-19 17:45:01.753238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.753261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.753429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.753451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.753688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.753711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.753972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.753996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.754116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.754139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 
00:26:59.630 [2024-11-19 17:45:01.754319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.754341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.754467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.754490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.754713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.754735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.754993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.755017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.755195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.755217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 
00:26:59.630 [2024-11-19 17:45:01.755471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.755495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.630 [2024-11-19 17:45:01.755729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.630 [2024-11-19 17:45:01.755751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.630 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.755928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.755963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.756204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.756227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.756410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.756434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.631 [2024-11-19 17:45:01.756613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.756636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.756806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.756829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.757073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.757098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.757225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.757248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.757458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.757482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.631 [2024-11-19 17:45:01.757689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.757712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.757837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.757860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.758089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.758117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.758325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.758347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.758458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.758480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.631 [2024-11-19 17:45:01.758671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.758695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.758882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.758905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.759103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.759125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.759257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.759280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.759400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.759423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.631 [2024-11-19 17:45:01.759677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.759700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.759989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.760014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.760198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.760220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.760327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.760350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.760530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.760553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.631 [2024-11-19 17:45:01.760721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.760744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.760985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.761011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.761143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.761165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.761275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.761298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.761420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.761443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.631 [2024-11-19 17:45:01.761697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.761720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.761890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.761914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.762093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.762117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.762304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.762327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.762425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.762451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.631 [2024-11-19 17:45:01.762716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.762739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.762931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.762966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.763092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.763116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.763294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.763317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 00:26:59.631 [2024-11-19 17:45:01.763504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.631 [2024-11-19 17:45:01.763532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.631 qpair failed and we were unable to recover it. 
00:26:59.632 [2024-11-19 17:45:01.763734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.632 [2024-11-19 17:45:01.763756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.632 qpair failed and we were unable to recover it. 00:26:59.632 [2024-11-19 17:45:01.763945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.632 [2024-11-19 17:45:01.763977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.632 qpair failed and we were unable to recover it. 00:26:59.632 [2024-11-19 17:45:01.764162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.632 [2024-11-19 17:45:01.764185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.632 qpair failed and we were unable to recover it. 00:26:59.632 [2024-11-19 17:45:01.764319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.632 [2024-11-19 17:45:01.764342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.632 qpair failed and we were unable to recover it. 00:26:59.632 [2024-11-19 17:45:01.764447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.632 [2024-11-19 17:45:01.764470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.632 qpair failed and we were unable to recover it. 
00:26:59.632 [2024-11-19 17:45:01.764663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.632 [2024-11-19 17:45:01.764686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.632 qpair failed and we were unable to recover it.
[... the same triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously through [2024-11-19 17:45:01.787179] ...]
00:26:59.635 [2024-11-19 17:45:01.787308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.787331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.635 [2024-11-19 17:45:01.787441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.787467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.635 [2024-11-19 17:45:01.787727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.787749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.635 [2024-11-19 17:45:01.787907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.787929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.635 [2024-11-19 17:45:01.788127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.788151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 
00:26:59.635 [2024-11-19 17:45:01.788281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.788303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.635 [2024-11-19 17:45:01.788397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.788418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.635 [2024-11-19 17:45:01.788569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.788602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.635 [2024-11-19 17:45:01.788810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.635 [2024-11-19 17:45:01.788845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.635 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.789075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.789114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 
00:26:59.919 [2024-11-19 17:45:01.789304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.789339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.789481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.789504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.789826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.789849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.790103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.790126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.790246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.790268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 
00:26:59.919 [2024-11-19 17:45:01.790382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.790404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.790595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.790617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.790793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.790816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.919 [2024-11-19 17:45:01.791007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.919 [2024-11-19 17:45:01.791030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.919 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.791165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.791188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.791346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.791369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.791487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.791509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.791671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.791693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.791865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.791887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.791997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.792020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.792122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.792145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.792252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.792274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.792360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.792382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.792508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.792534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.792694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.792716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.792978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.793002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.793189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.793213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.793394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.793417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.793532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.793555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.793784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.793807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.794038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.794062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.794164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.794187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.794348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.794372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.794461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.794484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.794769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.794792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.794957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.794981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.795096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.795118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.795298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.795321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.795581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.795604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.795842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.795864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.796063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.796086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.796258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.796280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.796539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.796562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.796746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.796769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.797008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.797031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.797204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.797226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.797383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.797407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.797662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.797684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.797788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.797810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.797999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.798023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 
00:26:59.920 [2024-11-19 17:45:01.798182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.798205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.798398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.920 [2024-11-19 17:45:01.798420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.920 qpair failed and we were unable to recover it. 00:26:59.920 [2024-11-19 17:45:01.798687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.798708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.798892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.798915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.799072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.799095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.799210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.799233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.799408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.799430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.799556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.799578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.799863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.799886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.800102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.800125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.800252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.800275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.800526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.800549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.800720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.800742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.800995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.801019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.801180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.801223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.801397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.801419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.801593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.801615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.801716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.801739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.801922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.801944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.802124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.802147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.802325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.802347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.802449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.802471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.802641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.802664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.802823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.802846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.803017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.803040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.803165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.803188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.803439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.803463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.803689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.803711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.803966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.803990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.804168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.804191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.804370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.804392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.804644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.804667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.804895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.804917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.805049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.805073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.805240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.805262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.805394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.805418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.805666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.805689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.805872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.805895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.806188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.806212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 00:26:59.921 [2024-11-19 17:45:01.806458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.921 [2024-11-19 17:45:01.806480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.921 qpair failed and we were unable to recover it. 
00:26:59.921 [2024-11-19 17:45:01.806719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.806741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.806898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.806926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.807158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.807182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.807344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.807367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.807573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.807595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 
00:26:59.922 [2024-11-19 17:45:01.807859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.807881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.808007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.808031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.808210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.808233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.808358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.808381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.808545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.808567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 
00:26:59.922 [2024-11-19 17:45:01.808766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.808789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.809074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.809098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.809207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.809230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.809386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.809408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.809702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.809724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 
00:26:59.922 [2024-11-19 17:45:01.809943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.809988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.810220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.810243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.810437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.810459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.810687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.810708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.810964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.810987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 
00:26:59.922 [2024-11-19 17:45:01.811253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.811276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.811389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.811411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.811572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.811594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.811756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.811779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.812040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.812063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 
00:26:59.922 [2024-11-19 17:45:01.812238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.812261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.812439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.812462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.812570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.812593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.812751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.812778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.813053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.813075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 
00:26:59.922 [2024-11-19 17:45:01.813168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.813190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.813439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.813462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.813707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.813728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.813897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.813919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.814192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.814215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 
00:26:59.922 [2024-11-19 17:45:01.814397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.814419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.814693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.814714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.814974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.922 [2024-11-19 17:45:01.814998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.922 qpair failed and we were unable to recover it. 00:26:59.922 [2024-11-19 17:45:01.815189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.815212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.815470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.815492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.815659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.815699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.815987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.816011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.816197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.816220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.816382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.816405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.816590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.816612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.816867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.816889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.817060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.817084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.817334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.817356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.817536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.817559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.817735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.817758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.817931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.817962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.818195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.818218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.818447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.818470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.818707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.818728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.818981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.819004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.819266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.819289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.819576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.819599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.819706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.819729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.819924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.819946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.820066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.820089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.820249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.820271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.820523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.820545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.820722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.820744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.820849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.820872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.821045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.821069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.821317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.821340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.821432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.821454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.821579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.821601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.821716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.821738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.821976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.822000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.822185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.822208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.822314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.822336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.822466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.822488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.822701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.822723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 00:26:59.923 [2024-11-19 17:45:01.822904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.822927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it. 
00:26:59.923 [2024-11-19 17:45:01.823103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.923 [2024-11-19 17:45:01.823127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.923 qpair failed and we were unable to recover it.
[... the same three-line error triple — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats ~115 more times between 17:45:01.823 and 17:45:01.844, almost all for tqpair=0x18a9ba0; two occurrences report tqpair=0x7f4100000b90 instead ...]
00:26:59.927 [2024-11-19 17:45:01.844211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.844233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.844392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.844416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.844575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.844597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.844760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.844785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.844884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.844906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 
00:26:59.927 [2024-11-19 17:45:01.845089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.845112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.845245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.845267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.845359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.845382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.845497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.845519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.845630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.845653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 
00:26:59.927 [2024-11-19 17:45:01.845907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.845930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.846053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.846184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.846292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.846421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 
00:26:59.927 [2024-11-19 17:45:01.846558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.846674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.846804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.846929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.846966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.847080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.847104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 
00:26:59.927 [2024-11-19 17:45:01.847226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.847249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.847420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.847441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.847672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.847696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.847896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.847920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.848052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.848077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 
00:26:59.927 [2024-11-19 17:45:01.848173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.848196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.848443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.848465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.848669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.848692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.848779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.848807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.848915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.848938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 
00:26:59.927 [2024-11-19 17:45:01.849113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.849137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.849247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.849269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.849452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.849475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.927 [2024-11-19 17:45:01.849566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.927 [2024-11-19 17:45:01.849589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.927 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.849776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.849799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 
00:26:59.928 [2024-11-19 17:45:01.849963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.849987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.850276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.850300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.850472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.850494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.850668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.850691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.850797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.850821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 
00:26:59.928 [2024-11-19 17:45:01.850929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.850962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.851139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.851163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.851260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.851284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.851388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.851412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.851510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.851534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 
00:26:59.928 [2024-11-19 17:45:01.851691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.851714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.851878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.851901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.852074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.852097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.852277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.852300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.852540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.852563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 
00:26:59.928 [2024-11-19 17:45:01.852758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.852782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.852901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.852924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.853149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.853188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.853391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.853426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.853691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.853724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 
00:26:59.928 [2024-11-19 17:45:01.853980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.854025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.854155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.854189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.854395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.854429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.854631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.854655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.854863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.854886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 
00:26:59.928 [2024-11-19 17:45:01.854991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.855015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.855167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.855189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.855300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.855323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.855488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.855510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.855601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.855624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 
00:26:59.928 [2024-11-19 17:45:01.855860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.855883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.855985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.928 [2024-11-19 17:45:01.856008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.928 qpair failed and we were unable to recover it. 00:26:59.928 [2024-11-19 17:45:01.856177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.856199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.856369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.856391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.856512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.856534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.856710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.856731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.856841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.856864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.856980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.857003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.857232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.857254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.857434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.857457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.857551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.857574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.857687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.857710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.857814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.857837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.857938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.857971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.858195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.858217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.858386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.858409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.858632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.858655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.858741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.858768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.858862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.858884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.858990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.859016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.859106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.859129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.859306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.859329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.859464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.859487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.859640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.859662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.859753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.859775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.859883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.859905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.860001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.860025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.860189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.860211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.860377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.860399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.860567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.860590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.860750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.860772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.860868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.860891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.861053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.861077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.861164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.861187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.861368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.861391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.861478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.861500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.861599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.861622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.861740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.861762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.861919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.861942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 00:26:59.929 [2024-11-19 17:45:01.862121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.862144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.929 qpair failed and we were unable to recover it. 
00:26:59.929 [2024-11-19 17:45:01.862304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.929 [2024-11-19 17:45:01.862327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.862489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.862511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.862666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.862690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.862791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.862813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.862901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.862924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 
00:26:59.930 [2024-11-19 17:45:01.863047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.863071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.863175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.863197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.863287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.863309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.863486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.863508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.863619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.863642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 
00:26:59.930 [2024-11-19 17:45:01.863731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.863754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.863914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.863936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.864036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.864059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.864176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.864199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.864288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.864311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 
00:26:59.930 [2024-11-19 17:45:01.864413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.864436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.864532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.864553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.864647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.864670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.864919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.865022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.865228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.865265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 
00:26:59.930 [2024-11-19 17:45:01.865481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.865515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.865698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.865731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.865926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.865974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.866174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.866208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.866312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.866338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 
00:26:59.930 [2024-11-19 17:45:01.866566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.866589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.866701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.866724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.866895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.866918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.867019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.867043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.867270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.867292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 
00:26:59.930 [2024-11-19 17:45:01.867378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.867400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.867495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.867518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.867621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.867643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.867756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.867779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.868003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.868027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 
00:26:59.930 [2024-11-19 17:45:01.868196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.868219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.868377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.868400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.868579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.868602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.930 [2024-11-19 17:45:01.868686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.930 [2024-11-19 17:45:01.868708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.930 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.868801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.868823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.869021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.869044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.869171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.869195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.869279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.869301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.869406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.869428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.869516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.869538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.869780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.869817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.870007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.870152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.870365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.870480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.870597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.870702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.870812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.870917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.870941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.871146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.871170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.871341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.871363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.871446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.871467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.871692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.871715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.871898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.871919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.872111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.872134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.872297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.872319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.872414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.872436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.872528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.872551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.872705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.872729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.872813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.872834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.873015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.873040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.873197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.873230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.873339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.873361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.873525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.873547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.873651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.873674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.873831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.873853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.873973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.873997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.874102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.874129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.874221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.874242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 00:26:59.931 [2024-11-19 17:45:01.874429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.931 [2024-11-19 17:45:01.874451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.931 qpair failed and we were unable to recover it. 
00:26:59.931 [2024-11-19 17:45:01.874556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.931 [2024-11-19 17:45:01.874578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.931 qpair failed and we were unable to recover it.
00:26:59.931 [2024-11-19 17:45:01.874738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.931 [2024-11-19 17:45:01.874759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.931 qpair failed and we were unable to recover it.
00:26:59.931 [2024-11-19 17:45:01.874911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.874933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.875028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.875050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.875291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.875313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3615069 Killed "${NVMF_APP[@]}" "$@"
00:26:59.932 [2024-11-19 17:45:01.875480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.875502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.875595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.875617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.875732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.875755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.875837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.875859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.876020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.876042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:59.932 [2024-11-19 17:45:01.876218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.876242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.876464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.876486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:59.932 [2024-11-19 17:45:01.876605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.876629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.876710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.876731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.876837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.876862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:59.932 [2024-11-19 17:45:01.877047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.877071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:59.932 [2024-11-19 17:45:01.877248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.877270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.877458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.877480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.932 [2024-11-19 17:45:01.877707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.877729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.877839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.877862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.878019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.878042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.878221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.878243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.878438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.878461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.878583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.878605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.878709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.878731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.878840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.878863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.879072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.879096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.879322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.879344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.879452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.879474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.879580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.879602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.879692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.879715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.879814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.879836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.879924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.879954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.880116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.880137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.880233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.880255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.880430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.880456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.880634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.880656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.932 [2024-11-19 17:45:01.880829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.932 [2024-11-19 17:45:01.880852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.932 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.881010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.881035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.881191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.881210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.881378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.881399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.881502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.881523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.881682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.881706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.881873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.881895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.882148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.882170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.882415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.882437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.882551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.882573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.882815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.882837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.882938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.882971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.883068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.883090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.883313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.883336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.883506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.883527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.883764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.883788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.883881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.883903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.884060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.884084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.884319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.884343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3615938
00:26:59.933 [2024-11-19 17:45:01.884434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.884457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.884675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3615938
00:26:59.933 [2024-11-19 17:45:01.884698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:59.933 [2024-11-19 17:45:01.884925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.884956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.885061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.885088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3615938 ']'
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.885200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.885227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.885331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.885356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:59.933 [2024-11-19 17:45:01.885451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.885473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:59.933 [2024-11-19 17:45:01.885705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.885727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.885837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.885858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:59.933 [2024-11-19 17:45:01.886032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.886057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 [2024-11-19 17:45:01.886154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.886176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.933 qpair failed and we were unable to recover it.
00:26:59.933 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:59.933 [2024-11-19 17:45:01.886291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.933 [2024-11-19 17:45:01.886314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.886489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.886512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 17:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.934 [2024-11-19 17:45:01.886604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.886627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.886713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.886735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.886912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.886934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.887030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.887052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.887216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.887237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.887335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.887356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.887457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.887480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.887645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.887667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.887825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.887847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.887962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.887985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.888155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.888177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.888288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.888310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.888395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.888422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.888607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.888629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.888782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.888803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.888965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.888988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.889165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.889189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.889302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.889323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.889433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.889455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.889624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.889647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.889749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.934 [2024-11-19 17:45:01.889770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.934 qpair failed and we were unable to recover it.
00:26:59.934 [2024-11-19 17:45:01.889858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.889880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.890101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.890124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.890228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.890250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.890410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.890432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.890595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.890617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 
00:26:59.934 [2024-11-19 17:45:01.890725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.890746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.890856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.890878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.891030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.891053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.891216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.891242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.891348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.891370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 
00:26:59.934 [2024-11-19 17:45:01.891481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.891504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.891604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.891625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.891784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.891805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.891904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.891925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 00:26:59.934 [2024-11-19 17:45:01.892098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.892121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.934 qpair failed and we were unable to recover it. 
00:26:59.934 [2024-11-19 17:45:01.892272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.934 [2024-11-19 17:45:01.892293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.892385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.892406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.892525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.892547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.892743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.892765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.892866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.892887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 
00:26:59.935 [2024-11-19 17:45:01.892986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.893010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.893254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.893276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.893386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.893408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.893571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.893592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.893755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.893778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 
00:26:59.935 [2024-11-19 17:45:01.894118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.894146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.894241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.894264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.894514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.894538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.894622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.894644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.894731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.894752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 
00:26:59.935 [2024-11-19 17:45:01.894851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.894873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.895020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.895043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.895155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.895176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.895349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.895370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.895469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.895491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 
00:26:59.935 [2024-11-19 17:45:01.895586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.895612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.895844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.895865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.896030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.896052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.896222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.896244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.896353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.896375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 
00:26:59.935 [2024-11-19 17:45:01.896464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.896486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.896654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.896676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.896777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.896799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.896962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.896985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.897147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.897169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 
00:26:59.935 [2024-11-19 17:45:01.897334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.897355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.897506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.897527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.897711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.897733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.897900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.897922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.898047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.898070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 
00:26:59.935 [2024-11-19 17:45:01.898230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.898252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.898423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.898445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.898701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.898723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.898967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.935 [2024-11-19 17:45:01.898989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.935 qpair failed and we were unable to recover it. 00:26:59.935 [2024-11-19 17:45:01.899106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.899129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.899345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.899367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.899490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.899511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.899774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.899796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.899902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.899924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.900102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.900125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.900291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.900313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.900541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.900563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.900726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.900755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.900843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.900865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.901033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.901056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.901337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.901358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.901469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.901490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.901567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.901590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.901812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.901834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.901969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.901992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.902221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.902243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.902337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.902358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.902638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.902659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.902745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.902767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.902913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.902935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.903028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.903050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.903217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.903238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.903399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.903420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.903597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.903618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.903761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.903782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.904005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.904030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.904252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.904274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.904368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.904389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.904496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.904518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.904759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.904780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.904888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.904911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.905159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.905182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.905357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.905379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.905533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.905554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.905730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.905752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 
00:26:59.936 [2024-11-19 17:45:01.905935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.905964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.906048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.906070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.936 qpair failed and we were unable to recover it. 00:26:59.936 [2024-11-19 17:45:01.906167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.936 [2024-11-19 17:45:01.906189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.906304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.906325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.906421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.906443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.906524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.906545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.906703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.906725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.906971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.906993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.907164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.907186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.907305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.907326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.907428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.907450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.907555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.907576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.907744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.907766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.907880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.907902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.908014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.908037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.908190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.908211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.908393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.908414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.908565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.908588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.908744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.908766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.908924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.908946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.909107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.909129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.909227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.909248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.909419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.909441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.909619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.909640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.909802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.909823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.909988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.910011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.910177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.910199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.910290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.910312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.910504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.910526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.910687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.910709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.910803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.910825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.911055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.911078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.911172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.911194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.911303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.911324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.911497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.911518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.911620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.911641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.911738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.911760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.911857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.911883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.912060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.912083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.937 [2024-11-19 17:45:01.912234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.912255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 
00:26:59.937 [2024-11-19 17:45:01.912422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.937 [2024-11-19 17:45:01.912447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.937 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.912539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.912560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.912738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.912760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.912911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.912932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.913111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.913134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 
00:26:59.938 [2024-11-19 17:45:01.913234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.913256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.913413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.913434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.913597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.913618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.913801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.913823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.913997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.914019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 
00:26:59.938 [2024-11-19 17:45:01.914175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.914197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.914369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.914390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.914556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.914578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.914728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.914750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.914925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.914964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 
00:26:59.938 [2024-11-19 17:45:01.915142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.915164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.915281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.915303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.915399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.915419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.915575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.915597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.915747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.915768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 
00:26:59.938 [2024-11-19 17:45:01.915863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.915884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.915981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.916003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.916184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.916205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.916283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.916305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.916463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.916484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 
00:26:59.938 [2024-11-19 17:45:01.916592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.916614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.916690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.916712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.916899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.916924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.917083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.917105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.917273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.917295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 
00:26:59.938 [2024-11-19 17:45:01.917449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.917471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.917702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.917723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.917826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.917848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.918006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.918029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 00:26:59.938 [2024-11-19 17:45:01.918113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.938 [2024-11-19 17:45:01.918135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.938 qpair failed and we were unable to recover it. 
00:26:59.939 [2024-11-19 17:45:01.918355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.918377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.918459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.918480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.918723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.918745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.918963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.918986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.919150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.919171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 
00:26:59.939 [2024-11-19 17:45:01.919279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.919300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.919471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.919493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.919712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.919734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.919899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.919920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.920057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.920079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 
00:26:59.939 [2024-11-19 17:45:01.920247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.920269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.920525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.920547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.920730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.920752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.921008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.921031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.921185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.921207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 
00:26:59.939 [2024-11-19 17:45:01.921376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.921398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.921564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.921585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.921681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.921703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.921802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.921824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 00:26:59.939 [2024-11-19 17:45:01.921982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.939 [2024-11-19 17:45:01.922005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.939 qpair failed and we were unable to recover it. 
00:26:59.941 [2024-11-19 17:45:01.934800] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
00:26:59.941 [2024-11-19 17:45:01.934841] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:59.942 [2024-11-19 17:45:01.940967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.940990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.941238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.941260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.941361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.941383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.941484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.941506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.941665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.941686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 
00:26:59.942 [2024-11-19 17:45:01.941780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.941801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.942087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.942109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.942212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.942234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.942477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.942499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.942652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.942673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 
00:26:59.942 [2024-11-19 17:45:01.942836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.942859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.943041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.943068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.943237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.943259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.943438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.943460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.943649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.943670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 
00:26:59.942 [2024-11-19 17:45:01.943842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.943863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.944023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.944045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.944281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.944302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.944471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.942 [2024-11-19 17:45:01.944492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.942 qpair failed and we were unable to recover it. 00:26:59.942 [2024-11-19 17:45:01.944588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.944609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.944766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.944787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.944935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.944963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.945110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.945131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.945236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.945258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.945351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.945373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.945526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.945547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.945655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.945677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.945833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.945855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.945934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.945976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.946133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.946155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.946319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.946341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.946498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.946519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.946688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.946710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.946808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.946829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.946983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.947006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.947160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.947181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.947345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.947366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.947472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.947493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.947590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.947614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.947781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.947803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.947966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.947989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.948086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.948107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.948275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.948296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.948460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.948481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.948641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.948662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.948758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.948780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.948886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.948908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.949066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.949087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.949322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.949343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.949433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.949454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.949580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.949601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.949776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.949797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.950027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.950049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.950210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.950232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.950384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.950405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 
00:26:59.943 [2024-11-19 17:45:01.950502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.950524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.950759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.950781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.943 qpair failed and we were unable to recover it. 00:26:59.943 [2024-11-19 17:45:01.950863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.943 [2024-11-19 17:45:01.950885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.950978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.951001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.951089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.951110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 
00:26:59.944 [2024-11-19 17:45:01.951222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.951243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.951457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.951479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.951584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.951606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.951772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.951794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.951958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.951981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 
00:26:59.944 [2024-11-19 17:45:01.952086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.952108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.952277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.952299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.952462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.952483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.952641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.952662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.952821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.952842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 
00:26:59.944 [2024-11-19 17:45:01.952938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.952966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.953155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.953176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.953396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.953417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.953652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.953674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 00:26:59.944 [2024-11-19 17:45:01.953834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.944 [2024-11-19 17:45:01.953855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.944 qpair failed and we were unable to recover it. 
00:26:59.944 [2024-11-19 17:45:01.953941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.944 [2024-11-19 17:45:01.953970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.944 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / qpair-failure pair for tqpair=0x18a9ba0 repeats with successive timestamps, 17:45:01.954097 through 17:45:01.970215 ...]
00:26:59.947 [2024-11-19 17:45:01.970531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.947 [2024-11-19 17:45:01.970603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4100000b90 with addr=10.0.0.2, port=4420
00:26:59.947 qpair failed and we were unable to recover it.
00:26:59.947 [2024-11-19 17:45:01.970850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.947 [2024-11-19 17:45:01.970922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:26:59.947 qpair failed and we were unable to recover it.
00:26:59.947 [2024-11-19 17:45:01.971129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.947 [2024-11-19 17:45:01.971199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420
00:26:59.947 qpair failed and we were unable to recover it.
[... the tqpair=0x18a9ba0 pair then repeats again with successive timestamps, 17:45:01.971484 through 17:45:01.973738 ...]
00:26:59.947 [2024-11-19 17:45:01.973891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.973912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.974013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.974035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.974213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.974234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.974398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.974419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.974635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.974657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 
00:26:59.947 [2024-11-19 17:45:01.974871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.974892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.975161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.975184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.975428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.975450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.975603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.975624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.975793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.975815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 
00:26:59.947 [2024-11-19 17:45:01.976030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.976052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.976290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.976315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.976504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.976525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.976697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.976718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.976941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.976970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 
00:26:59.947 [2024-11-19 17:45:01.977180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.977202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.947 qpair failed and we were unable to recover it. 00:26:59.947 [2024-11-19 17:45:01.977371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.947 [2024-11-19 17:45:01.977392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.977551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.977573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.977678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.977699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.977799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.977820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.948 [2024-11-19 17:45:01.977973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.977996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.978164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.978186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.978287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.978308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.978521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.978542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.978635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.978656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.948 [2024-11-19 17:45:01.978741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.978762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.978868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.978890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.978995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.979018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.979182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.979203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.979419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.979441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.948 [2024-11-19 17:45:01.979540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.979561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.979709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.979731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.979876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.979897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.979997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.980019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.980168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.980189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.948 [2024-11-19 17:45:01.980442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.980464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.980579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.980600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.980808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.980828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.981026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.981052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.981162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.981184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.948 [2024-11-19 17:45:01.981335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.981356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.981456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.981477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.981564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.981585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.981823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.981843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.981998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.982021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.948 [2024-11-19 17:45:01.982125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.982146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.982295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.982316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.982408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.982429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.982597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.982617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.982791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.982812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.948 [2024-11-19 17:45:01.983058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.983081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.983250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.983271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.983365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.983386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.983504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.983526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 00:26:59.948 [2024-11-19 17:45:01.983626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.948 [2024-11-19 17:45:01.983647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.948 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.983746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.983767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.983925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.983945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.984142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.984164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.984322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.984343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.984430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.984452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.984546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.984567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.984667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.984688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.984781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.984802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.984905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.984927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.985095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.985117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.985200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.985224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.985331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.985352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.985503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.985524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.985677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.985698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.985793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.985814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.985922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.985943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.986067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.986088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.986356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.986377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.986557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.986578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.986753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.986773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.986867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.986888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.987079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.987101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.987262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.987283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.987518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.987539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.987721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.987742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.987961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.987983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.988132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.988154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.988250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.988270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.988365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.988386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.988601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.988622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.988715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.988736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.988896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.988916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.989069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.989091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.989240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.989261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.989359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.989379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 
00:26:59.949 [2024-11-19 17:45:01.989528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.989548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.989709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.989730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.989952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.989974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.949 qpair failed and we were unable to recover it. 00:26:59.949 [2024-11-19 17:45:01.990141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.949 [2024-11-19 17:45:01.990162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.990374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.990396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.990496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.990517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.990675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.990696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.990942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.990972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.991142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.991163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.991252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.991273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.991495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.991515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.991724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.991745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.991979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.992000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.992195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.992216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.992367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.992388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.992603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.992623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.992786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.992807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.992960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.992982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.993076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.993097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.993209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.993230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.993346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.993367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.993467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.993488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.993585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.993606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.993797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.993818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.993899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.993919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.994092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.994114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.994266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.994287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.994433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.994454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.994549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.994570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.994675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.994695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.994807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.994829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.994981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.995167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.995334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.995440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.995564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.995686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.995792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.995921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.995942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.996044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.996066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 
00:26:59.950 [2024-11-19 17:45:01.996144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.996165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.996276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.950 [2024-11-19 17:45:01.996297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.950 qpair failed and we were unable to recover it. 00:26:59.950 [2024-11-19 17:45:01.996445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.996467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.996576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.996601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.996699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.996720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 
00:26:59.951 [2024-11-19 17:45:01.996803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.996824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.996988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.997011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.997106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.997127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.997364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.997385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.997548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.997569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 
00:26:59.951 [2024-11-19 17:45:01.997667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.997688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.997796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.997816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.997972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.997995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.998088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.998109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.998208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.998229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 
00:26:59.951 [2024-11-19 17:45:01.998382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.998403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.998563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.998583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.998804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.998824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.998918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.998939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.999060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.999081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 
00:26:59.951 [2024-11-19 17:45:01.999234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.999255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.999408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.999429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.999603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.999624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.999720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.999741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:01.999889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:01.999910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 
00:26:59.951 [2024-11-19 17:45:02.000011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.000033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.000185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.000205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.000354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.000375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.000525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.000545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.000695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.000715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 
00:26:59.951 [2024-11-19 17:45:02.000827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.000852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.000996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.951 [2024-11-19 17:45:02.001055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.001076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.001177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.001198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.001439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.001459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.951 [2024-11-19 17:45:02.001554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.001575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 
00:26:59.951 [2024-11-19 17:45:02.001768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.951 [2024-11-19 17:45:02.001789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.951 qpair failed and we were unable to recover it. 00:26:59.952 [2024-11-19 17:45:02.001898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.952 [2024-11-19 17:45:02.001919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.952 qpair failed and we were unable to recover it. 00:26:59.952 [2024-11-19 17:45:02.002094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.952 [2024-11-19 17:45:02.002116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.952 qpair failed and we were unable to recover it. 00:26:59.952 [2024-11-19 17:45:02.002200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.952 [2024-11-19 17:45:02.002221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.952 qpair failed and we were unable to recover it. 00:26:59.952 [2024-11-19 17:45:02.002413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.952 [2024-11-19 17:45:02.002434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.952 qpair failed and we were unable to recover it. 
00:26:59.952 [2024-11-19 17:45:02.002522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.952 [2024-11-19 17:45:02.002542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:26:59.952 qpair failed and we were unable to recover it.
00:26:59.955 [2024-11-19 17:45:02.022594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.022615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.022766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.022788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.022894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.022915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.023098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.023121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.023212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.023233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 
00:26:59.955 [2024-11-19 17:45:02.023381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.023402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.023492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.023513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.023691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.023712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.023877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.023899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.023980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.024002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 
00:26:59.955 [2024-11-19 17:45:02.024164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.024186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.024371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.024393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.024486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.024508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.024606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.024634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.024785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.024806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 
00:26:59.955 [2024-11-19 17:45:02.024910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.024931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.025150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.025171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.025257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.025278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.025528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.025549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.025704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.025726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 
00:26:59.955 [2024-11-19 17:45:02.025968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.025990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.026088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.026109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.026258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.026278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.026440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.026460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.026704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.026728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 
00:26:59.955 [2024-11-19 17:45:02.026820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.026843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.026957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.026980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.027075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.027095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.027267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.027289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.027528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.027549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 
00:26:59.955 [2024-11-19 17:45:02.027652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.027672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.027834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.027855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.027958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.027981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.028126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.955 [2024-11-19 17:45:02.028148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.955 qpair failed and we were unable to recover it. 00:26:59.955 [2024-11-19 17:45:02.028321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.028343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 
00:26:59.956 [2024-11-19 17:45:02.028422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.028443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.028601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.028622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.028780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.028801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.028904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.028925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.029090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.029113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 
00:26:59.956 [2024-11-19 17:45:02.029292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.029314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.029479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.029501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.029610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.029632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.029742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.029764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.029864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.029885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 
00:26:59.956 [2024-11-19 17:45:02.029983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.030006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.030195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.030216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.030300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.030320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.030408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.030430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.030650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.030671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 
00:26:59.956 [2024-11-19 17:45:02.030781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.030802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.030956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.030983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.031145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.031167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.031402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.031423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.031580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.031601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 
00:26:59.956 [2024-11-19 17:45:02.031684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.031704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.031802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.031824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.031998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.032020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.032267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.032288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.032389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.032410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 
00:26:59.956 [2024-11-19 17:45:02.032570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.032591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.032757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.032779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.032943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.032970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.033067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.033088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.033189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.033211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 
00:26:59.956 [2024-11-19 17:45:02.033305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.033326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.033495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.033516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.033683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.956 [2024-11-19 17:45:02.033704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.956 qpair failed and we were unable to recover it. 00:26:59.956 [2024-11-19 17:45:02.033812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.033834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.033997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.034020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.034120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.034141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.034355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.034377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.034562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.034583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.034674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.034695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.034789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.034811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.034988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.035010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.035253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.035274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.035355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.035376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.035547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.035568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.035756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.035777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.035872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.035894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.035976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.035998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.036181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.036203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.036315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.036337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.036444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.036465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.036622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.036643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.036897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.036918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.037076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.037098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.037267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.037289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.037390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.037411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.037572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.037593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.037747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.037768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.038009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.038032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.038186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.038207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.038309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.038330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.038598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.038619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.038806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.038827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.038988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.039011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.039174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.039195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.039292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.039313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.039502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.039523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.039678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.039699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.039844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.039866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.040012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.040035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.040142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.040163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 
00:26:59.957 [2024-11-19 17:45:02.040376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.040398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.957 [2024-11-19 17:45:02.040562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.957 [2024-11-19 17:45:02.040582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.957 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.040759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.040780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.040953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.040975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.041136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.041157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.958 [2024-11-19 17:45:02.041257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.041278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.041500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.041522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.041602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.041623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.041741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.041763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.041979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.042003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.958 [2024-11-19 17:45:02.042194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.042216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.042309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.042331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.042493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.042515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.042663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.042685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.042930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.042963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.958 [2024-11-19 17:45:02.043067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.043090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.043304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.043326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.043544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.043566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.043676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.043697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.043888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.043911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.958 [2024-11-19 17:45:02.044004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.044026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.044192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.044213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.044274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.958 [2024-11-19 17:45:02.044305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.958 [2024-11-19 17:45:02.044313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.958 [2024-11-19 17:45:02.044320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.958 [2024-11-19 17:45:02.044319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.044327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.958 [2024-11-19 17:45:02.044340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.958 [2024-11-19 17:45:02.044580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.044600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.044747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.044770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.044870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.044892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.045012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.045035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.045139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.045160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.958 [2024-11-19 17:45:02.045323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.045344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.045511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.045533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.045682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.045703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.045852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.045873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.045829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:59.958 [2024-11-19 17:45:02.045983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.046006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.958 [2024-11-19 17:45:02.045937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:59.958 [2024-11-19 17:45:02.046044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:59.958 [2024-11-19 17:45:02.046115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.046135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.046045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:59.958 [2024-11-19 17:45:02.046241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.046262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.046451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.046471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 00:26:59.958 [2024-11-19 17:45:02.046627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.958 [2024-11-19 17:45:02.046648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.958 qpair failed and we were unable to recover it. 
00:26:59.959 [2024-11-19 17:45:02.046804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.046824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.046964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.047015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.047271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.047307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.047515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.047548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.047724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.047757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 
00:26:59.959 [2024-11-19 17:45:02.047873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.047905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.048095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.048129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.048252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.048285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.048490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.048523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.048803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.048837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 
00:26:59.959 [2024-11-19 17:45:02.048962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.048988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.049209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.049231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.049449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.049471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.049631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.049653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.049868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.049894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 
00:26:59.959 [2024-11-19 17:45:02.050062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.050086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.050267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.050290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.050477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.050499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.050616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.050637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.050814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.050836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 
00:26:59.959 [2024-11-19 17:45:02.050969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.050991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.051101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.051123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.051365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.051387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.051494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.051516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.051664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.051686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 
00:26:59.959 [2024-11-19 17:45:02.051890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.051911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.052069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.052093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.052257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.052279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.052463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.052500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 00:26:59.959 [2024-11-19 17:45:02.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.959 [2024-11-19 17:45:02.052646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:26:59.959 qpair failed and we were unable to recover it. 
00:26:59.962 [2024-11-19 17:45:02.072705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.072727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 00:26:59.962 [2024-11-19 17:45:02.072896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.072918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 00:26:59.962 [2024-11-19 17:45:02.073034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.073058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 00:26:59.962 [2024-11-19 17:45:02.073156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.073177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 00:26:59.962 [2024-11-19 17:45:02.073355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.073377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 
00:26:59.962 [2024-11-19 17:45:02.073522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.073543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 00:26:59.962 [2024-11-19 17:45:02.073713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.073735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 00:26:59.962 [2024-11-19 17:45:02.073841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.073862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.962 qpair failed and we were unable to recover it. 00:26:59.962 [2024-11-19 17:45:02.074035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.962 [2024-11-19 17:45:02.074058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.074224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.074246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.074360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.074381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.074489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.074510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.074675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.074697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.074849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.074870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.075044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.075068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.075182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.075204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.075305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.075327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.075505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.075527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.075686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.075709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.075858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.075880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.076074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.076097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.076245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.076267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.076420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.076443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.076543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.076572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.076674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.076696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.076926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.076954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.077101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.077123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.077344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.077365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.077508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.077530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.077701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.077722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.077818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.077840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.077993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.078016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.078159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.078181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.078400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.078422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.078664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.078686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.078845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.078867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.079041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.079064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.079316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.079339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.079490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.079513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.079671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.079692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.079864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.079887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.080040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.080063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.080231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.080253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.080445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.080467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.080624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.080645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 
00:26:59.963 [2024-11-19 17:45:02.080758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.080780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.080875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.080896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.081090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.081113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.081338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.963 [2024-11-19 17:45:02.081359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.963 qpair failed and we were unable to recover it. 00:26:59.963 [2024-11-19 17:45:02.081456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.081478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 
00:26:59.964 [2024-11-19 17:45:02.081628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.081649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.081760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.081781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.081876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.081898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.082114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.082138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.082241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.082262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 
00:26:59.964 [2024-11-19 17:45:02.082429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.082451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.082542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.082564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.082642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.082663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.082827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.082849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.083090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.083113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 
00:26:59.964 [2024-11-19 17:45:02.083286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.083308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.083475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.083497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.083651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.083672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.083829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.083851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.083961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.083988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 
00:26:59.964 [2024-11-19 17:45:02.084093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.084115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.084278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.084299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.084450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.084471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.084618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.084639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.084799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.084821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 
00:26:59.964 [2024-11-19 17:45:02.085038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.085061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.085165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.085186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.085349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.085370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.085455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.085475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.085575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.085597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 
00:26:59.964 [2024-11-19 17:45:02.085808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.085829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.086023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.086046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.086137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.086160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.086248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.086270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 00:26:59.964 [2024-11-19 17:45:02.086364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.964 [2024-11-19 17:45:02.086384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.964 qpair failed and we were unable to recover it. 
00:26:59.967 [2024-11-19 17:45:02.105498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.967 [2024-11-19 17:45:02.105518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.967 qpair failed and we were unable to recover it. 00:26:59.967 [2024-11-19 17:45:02.105634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.967 [2024-11-19 17:45:02.105655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.105750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.105770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.105986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.106010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.106102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.106123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.968 [2024-11-19 17:45:02.106282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.106302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.106545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.106566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.106724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.106745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.106840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.106861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.107020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.107042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.968 [2024-11-19 17:45:02.107151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.107172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.107276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.107297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.107556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.107578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.107695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.107717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.107804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.107824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.968 [2024-11-19 17:45:02.107909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.107931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.108046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.108066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.108212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.108233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.108326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.108346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.108512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.108533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.968 [2024-11-19 17:45:02.108644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.108664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.108774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.108796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.108960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.108982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.109196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.109221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.109316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.109336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.968 [2024-11-19 17:45:02.109434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.109454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.109622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.109642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.109812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.109834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.109989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.110012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.110104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.110124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.968 [2024-11-19 17:45:02.110210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.110231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.110394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.110414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.110563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.110586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.110754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.110774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.110932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.110961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.968 [2024-11-19 17:45:02.111119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.111140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.111290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.111312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.111425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.111446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.111598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.111620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 00:26:59.968 [2024-11-19 17:45:02.111773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.968 [2024-11-19 17:45:02.111794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.968 qpair failed and we were unable to recover it. 
00:26:59.969 [2024-11-19 17:45:02.112009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.112031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.112190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.112210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.112318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.112340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.112500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.112520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.112677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.112699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 
00:26:59.969 [2024-11-19 17:45:02.112858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.112880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.112990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.113012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.113197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.113218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.113319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.113340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 00:26:59.969 [2024-11-19 17:45:02.113490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.969 [2024-11-19 17:45:02.113510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:26:59.969 qpair failed and we were unable to recover it. 
00:27:00.244 [2024-11-19 17:45:02.113699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.113724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.113832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.113853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.113940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.113973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.114148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.114169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.114323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.114343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 
00:27:00.244 [2024-11-19 17:45:02.114503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.114524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.114690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.114711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.114870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.114891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.115088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.115110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.115284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.115306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 
00:27:00.244 [2024-11-19 17:45:02.115405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.115425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.115541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.115562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.115721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.115743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.115910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.115931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.116048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.116071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 
00:27:00.244 [2024-11-19 17:45:02.116173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.116193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.116301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.116323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.116485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.116505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.244 qpair failed and we were unable to recover it. 00:27:00.244 [2024-11-19 17:45:02.116663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.244 [2024-11-19 17:45:02.116685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.245 qpair failed and we were unable to recover it. 00:27:00.245 [2024-11-19 17:45:02.116777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.245 [2024-11-19 17:45:02.116797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.245 qpair failed and we were unable to recover it. 
00:27:00.245 [2024-11-19 17:45:02.116882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.245 [2024-11-19 17:45:02.116903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.245 qpair failed and we were unable to recover it. 00:27:00.245 [2024-11-19 17:45:02.117049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.245 [2024-11-19 17:45:02.117072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.245 qpair failed and we were unable to recover it. 00:27:00.245 [2024-11-19 17:45:02.117166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.245 [2024-11-19 17:45:02.117187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.245 qpair failed and we were unable to recover it. 00:27:00.245 [2024-11-19 17:45:02.117288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.245 [2024-11-19 17:45:02.117308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.245 qpair failed and we were unable to recover it. 00:27:00.245 [2024-11-19 17:45:02.117406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.245 [2024-11-19 17:45:02.117426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.245 qpair failed and we were unable to recover it. 
00:27:00.245 [2024-11-19 17:45:02.117588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.117609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.117704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.117724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.117817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.117837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.117941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.117969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.118141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.118163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.118248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.118268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.118357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.118378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.118482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.118502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.118671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.118693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.118795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.118815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.118979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.119001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.119103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.119123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.119273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.119293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.119460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.119481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.119559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.119579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.119738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.119759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.119926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.119957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.120120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.120142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.120287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.120307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.120468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.120489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.120704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.120726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.120883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.120904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.120984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.121006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.121115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.121135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.121371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.121393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.121579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.121600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.121681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.121701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.121865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.121886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.122054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.122077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.122171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.122192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.122424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.122445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.245 [2024-11-19 17:45:02.122588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.245 [2024-11-19 17:45:02.122608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.245 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.122823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.122845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.122932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.122977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.123126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.123147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.123240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.123261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.123487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.123507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.123688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.123709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.123811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.123832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.123928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.123957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.124116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.124136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.124227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.124249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.124420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.124441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.124606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.124630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.124804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.124824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.124940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.124971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.125158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.125179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.125351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.125372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.125475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.125496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.125739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.125760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.125909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.125929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.126083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.126144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.126356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.126391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.126581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.126614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.126871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.126895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.127061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.127083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.127321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.127342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.127497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.127518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.127732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.127753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.127866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.127887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.128034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.128055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.128227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.128249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.128339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.128361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.128458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.128478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.128575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.128595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.128742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.128763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.128915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.128936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.129093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.129114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.129272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.129292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.129449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.246 [2024-11-19 17:45:02.129470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.246 qpair failed and we were unable to recover it.
00:27:00.246 [2024-11-19 17:45:02.129647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.129672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.129908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.129929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.130099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.130121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.130281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.130302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.130479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.130500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.130674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.130695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.130894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.130914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.131134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.131156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.131339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.131360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.131512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.131534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.131770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.131790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.131888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.131910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.132031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.132054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.132161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.132182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.132274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.132295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.132405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.132426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.132522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.132542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.132626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.132648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.132830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.132852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.133071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.133093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.133207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.133228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.133339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.133361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.133461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.133482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.133637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.133659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.133823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.133844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.133931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.133959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.134127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.134148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.134250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.134271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.134376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.134396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.134663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.134685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.134766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.134786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.134886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.134907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.135077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.135100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.135192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.135213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.135320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.135340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.135487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.135507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.135668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.135688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.135790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.247 [2024-11-19 17:45:02.135810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.247 qpair failed and we were unable to recover it.
00:27:00.247 [2024-11-19 17:45:02.136025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.248 [2024-11-19 17:45:02.136047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.248 qpair failed and we were unable to recover it.
00:27:00.248 [2024-11-19 17:45:02.136232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.248 [2024-11-19 17:45:02.136253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.248 qpair failed and we were unable to recover it.
00:27:00.248 [2024-11-19 17:45:02.136358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.248 [2024-11-19 17:45:02.136379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.248 qpair failed and we were unable to recover it.
00:27:00.248 [2024-11-19 17:45:02.136568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.248 [2024-11-19 17:45:02.136608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.248 qpair failed and we were unable to recover it.
00:27:00.248 [2024-11-19 17:45:02.136736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.248 [2024-11-19 17:45:02.136769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.248 qpair failed and we were unable to recover it.
00:27:00.248 [2024-11-19 17:45:02.136962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.248 [2024-11-19 17:45:02.136996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.248 qpair failed and we were unable to recover it.
00:27:00.248 [2024-11-19 17:45:02.137168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.248 [2024-11-19 17:45:02.137192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.248 qpair failed and we were unable to recover it.
00:27:00.248 [2024-11-19 17:45:02.137283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.137303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.137391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.137412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.248 [2024-11-19 17:45:02.137580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.137602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.137712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.137733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:00.248 [2024-11-19 17:45:02.137879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.137901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 
00:27:00.248 [2024-11-19 17:45:02.137999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.138020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.138112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:00.248 [2024-11-19 17:45:02.138134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.138228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.138250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.138411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.248 [2024-11-19 17:45:02.138437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.138597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.138618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 
00:27:00.248 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.248 [2024-11-19 17:45:02.138765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.138786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.138988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.139011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.139177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.139199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.139290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.139310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.139423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.139445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 
00:27:00.248 [2024-11-19 17:45:02.139527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.139547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.139712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.139734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.139998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.140021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.140206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.140227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.140329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.140350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 
00:27:00.248 [2024-11-19 17:45:02.140524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.140545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.140712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.140734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.140826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.140847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.141021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.141044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.141215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.141235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 
00:27:00.248 [2024-11-19 17:45:02.141451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.141472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.141555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.141577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.141736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.141758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.141907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.248 [2024-11-19 17:45:02.141926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.248 qpair failed and we were unable to recover it. 00:27:00.248 [2024-11-19 17:45:02.142137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.142174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 
00:27:00.249 [2024-11-19 17:45:02.142298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.142332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.142520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.142554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.142744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.142778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.142995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.143029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.143163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.143194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 
00:27:00.249 [2024-11-19 17:45:02.143447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.143471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.143591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.143614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.143769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.143789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.143963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.143984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.144200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.144221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 
00:27:00.249 [2024-11-19 17:45:02.144459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.144480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.144667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.144689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.144839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.144860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.145040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.145063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.145281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.145303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 
00:27:00.249 [2024-11-19 17:45:02.145384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.145406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.145511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.145532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.145612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.145633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.145824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.145872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.146059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.146094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 
00:27:00.249 [2024-11-19 17:45:02.146226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.146259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.146444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.146468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.146579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.146602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.146705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.146725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.146829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.146851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 
00:27:00.249 [2024-11-19 17:45:02.147036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.147057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.147169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.147189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.147333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.147354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.147444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.147464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.147570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.147591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 
00:27:00.249 [2024-11-19 17:45:02.147781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.147802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.249 qpair failed and we were unable to recover it. 00:27:00.249 [2024-11-19 17:45:02.147913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.249 [2024-11-19 17:45:02.147935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.148135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.148158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.148336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.148357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.148523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.148543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 
00:27:00.250 [2024-11-19 17:45:02.148646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.148668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.148821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.148841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.149030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.149052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.149211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.149232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.149334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.149354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 
00:27:00.250 [2024-11-19 17:45:02.149517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.149538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.149621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.149642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.149807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.149828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.149931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.149958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.150176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.150198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 
00:27:00.250 [2024-11-19 17:45:02.150326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.150363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.150483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.150518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.150646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.150679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.150789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.150813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.150978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.151000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 
00:27:00.250 [2024-11-19 17:45:02.151168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.151189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.151336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.151358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.151440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.151461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.151617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.151640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.151804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.151825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 
00:27:00.250 [2024-11-19 17:45:02.151929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.151971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.152079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.152101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.152252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.152272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.152352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.152372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.152480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.152502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 
00:27:00.250 [2024-11-19 17:45:02.152651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.152672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.152765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.152786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.152873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.152894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.152990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.153013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.153114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.153136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 
00:27:00.250 [2024-11-19 17:45:02.153312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.153334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.153424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.153444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.153544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.153565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.250 [2024-11-19 17:45:02.153728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.250 [2024-11-19 17:45:02.153748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.250 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.153831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.153852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.154039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.154061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.154152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.154172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.154268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.154293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.154403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.154424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.154666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.154689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.154861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.154882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.154977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.155000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.155165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.155185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.155347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.155369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.155523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.155544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.155630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.155651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.155811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.155832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.155929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.155957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.156048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.156070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.156160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.156181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.156267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.156287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.156454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.156476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.156649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.156670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.156770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.156791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.156887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.156909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.157004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.157123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.157237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.157357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.157478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.157583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.157720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.157833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.157944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.157975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.158133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.158160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.158312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.158334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.158425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.158445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.158668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.158689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.158786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.158807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.158907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.158928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 
00:27:00.251 [2024-11-19 17:45:02.159114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.251 [2024-11-19 17:45:02.159154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.251 qpair failed and we were unable to recover it. 00:27:00.251 [2024-11-19 17:45:02.159279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.159313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.159553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.159587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.159718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.159743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.159835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.159858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 
00:27:00.252 [2024-11-19 17:45:02.159943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.159975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.160140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.160161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.160320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.160342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.160528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.160551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.160735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.160757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 
00:27:00.252 [2024-11-19 17:45:02.160842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.160863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.161025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.161157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.161262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.161429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 
00:27:00.252 [2024-11-19 17:45:02.161546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.161657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.161761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.161891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.161912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.162074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.162095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 
00:27:00.252 [2024-11-19 17:45:02.162176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.162198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.162313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.162338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.162420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.162441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.162590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.162611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.162707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.162729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 
00:27:00.252 [2024-11-19 17:45:02.162901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.162923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.163020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.163042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.163135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.163158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.163253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.163274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.163371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.163392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 
00:27:00.252 [2024-11-19 17:45:02.163609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.163630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.163730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.163752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.163904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.163925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.164044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.164083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.164267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.164300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 
00:27:00.252 [2024-11-19 17:45:02.164432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.164466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4104000b90 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.164647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.164671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.164770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.164791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.164937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.252 [2024-11-19 17:45:02.164968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.252 qpair failed and we were unable to recover it. 00:27:00.252 [2024-11-19 17:45:02.165061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.253 [2024-11-19 17:45:02.165082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.253 qpair failed and we were unable to recover it. 
00:27:00.254 [2024-11-19 17:45:02.170941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.254 [2024-11-19 17:45:02.170970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.254 qpair failed and we were unable to recover it. 00:27:00.254 [2024-11-19 17:45:02.171138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.254 [2024-11-19 17:45:02.171159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.254 qpair failed and we were unable to recover it. 00:27:00.254 [2024-11-19 17:45:02.171302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.254 [2024-11-19 17:45:02.171323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.254 qpair failed and we were unable to recover it. 00:27:00.254 [2024-11-19 17:45:02.171413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.254 [2024-11-19 17:45:02.171434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.254 qpair failed and we were unable to recover it. 00:27:00.254 [2024-11-19 17:45:02.171564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.254 [2024-11-19 17:45:02.171605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.254 qpair failed and we were unable to recover it. 
00:27:00.254 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:00.255 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:00.255 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.255 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:00.256 [2024-11-19 17:45:02.179769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.179790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.179873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.179895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.179988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.180098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.180287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 
00:27:00.256 [2024-11-19 17:45:02.180405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.180525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.180649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.180768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.180956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.180977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 
00:27:00.256 [2024-11-19 17:45:02.181056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.181172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.181275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.181376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.181546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 
00:27:00.256 [2024-11-19 17:45:02.181643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.181766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.181887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.181908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.182097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.182118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.182304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.182325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 
00:27:00.256 [2024-11-19 17:45:02.182476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.182498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.182593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.182614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.182714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.182735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.182838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.182860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.182961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.182983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 
00:27:00.256 [2024-11-19 17:45:02.183088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.183110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.183222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.183244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.183335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.183358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.183516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.183538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.183691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.183713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 
00:27:00.256 [2024-11-19 17:45:02.183806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.183827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.183912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.183934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.184054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.184077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.184163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.184184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.256 [2024-11-19 17:45:02.184368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.184389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 
00:27:00.256 [2024-11-19 17:45:02.184541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.256 [2024-11-19 17:45:02.184562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.256 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.184657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.184678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.184762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.184783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.184962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.184984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.185076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.185098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 
00:27:00.257 [2024-11-19 17:45:02.185200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.185222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.185370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.185391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.185546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.185568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.185662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.185683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.185776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.185798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 
00:27:00.257 [2024-11-19 17:45:02.185955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.185978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.186134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.186155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.186254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.186276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.186375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.186397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.186545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.186566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 
00:27:00.257 [2024-11-19 17:45:02.186676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.186698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.186774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.186795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.186891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.186913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.187019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.187042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.187135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.187157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 
00:27:00.257 [2024-11-19 17:45:02.187240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.187261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.187475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.187495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.187593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.187614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.187771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.187793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.187888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.187914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 
00:27:00.257 [2024-11-19 17:45:02.188073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.188094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.188176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.188197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.188433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.188454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.188541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.188562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.188654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.188674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 
00:27:00.257 [2024-11-19 17:45:02.188847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.188868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.188968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.188990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.189158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.189179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.189270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.189292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.189448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.189470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 
00:27:00.257 [2024-11-19 17:45:02.189654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.189675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.189836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.189857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.189938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.189968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.190136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.257 [2024-11-19 17:45:02.190157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.257 qpair failed and we were unable to recover it. 00:27:00.257 [2024-11-19 17:45:02.190262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.190283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 
00:27:00.258 [2024-11-19 17:45:02.190434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.190456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.190623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.190642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.190802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.190822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.190986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.191187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 
00:27:00.258 [2024-11-19 17:45:02.191310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.191420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.191543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.191647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.191747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 
00:27:00.258 [2024-11-19 17:45:02.191915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.191937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.192063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.192088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.192249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.192270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.192368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.192388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.192600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.192620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 
00:27:00.258 [2024-11-19 17:45:02.192824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.192845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.192961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.192983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.193064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.193086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.193217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.193237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.193383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.193405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 
00:27:00.258 [2024-11-19 17:45:02.193572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.193594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.193697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.193719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.193869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.193891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.193992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.194014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.194105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.194126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 
00:27:00.258 [2024-11-19 17:45:02.194278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.194299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.194397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.194418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.194567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.194588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.194807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.194829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.194910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.194932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 
00:27:00.258 [2024-11-19 17:45:02.195091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.195113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.195202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.195222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.195455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.258 [2024-11-19 17:45:02.195477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.258 qpair failed and we were unable to recover it. 00:27:00.258 [2024-11-19 17:45:02.195575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.195596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.195758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.195779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.195999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.196022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.196240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.196261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.196345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.196366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.196525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.196550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.196712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.196733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.196827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.196848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.196927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.196957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.197103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.197124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.197363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.197385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.197617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.197638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.197741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.197761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.197957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.197979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.198085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.198106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.198253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.198274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.198373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.198394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.198646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.198667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.198747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.198767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.198870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.198891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.198978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.199001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.199176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.199198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.199299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.199320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.199408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.199430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.199597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.199618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.199776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.199798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.199945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.199974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.200121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.200142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.200235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.200256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.200491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.200511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.200609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.200631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.200719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.200741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.200825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.200846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.200941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.200970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.201119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.201140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.201221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.201243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.201395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.201416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 
00:27:00.259 [2024-11-19 17:45:02.201656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.201676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.259 [2024-11-19 17:45:02.201961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.259 [2024-11-19 17:45:02.201983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.259 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.202067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.202088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.202253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.202274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.202423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.202444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 
00:27:00.260 [2024-11-19 17:45:02.202620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.202642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.202749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.202770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.202920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.202940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.203101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.203122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.203289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.203310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 
00:27:00.260 [2024-11-19 17:45:02.203420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.203441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.203531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.203552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.203772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.203793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.203890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.203910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.204093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.204117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 
00:27:00.260 [2024-11-19 17:45:02.204225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.204246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.204406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.204428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.204644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.204666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.204830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.204853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.204945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.204974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 
00:27:00.260 [2024-11-19 17:45:02.205055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.205076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.205170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.205192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.205365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.205387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.205486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.205509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.205657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.205678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 
00:27:00.260 [2024-11-19 17:45:02.205822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.205844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.206030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.206054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.206208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.206230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.206452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.206474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 00:27:00.260 [2024-11-19 17:45:02.206690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.260 [2024-11-19 17:45:02.206712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.260 qpair failed and we were unable to recover it. 
00:27:00.261 Malloc0
00:27:00.261 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.262 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:00.262 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.262 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:00.263 [2024-11-19 17:45:02.221889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:00.263 [2024-11-19 17:45:02.222477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.222498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.222719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.222740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.222901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.222922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.223014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.223036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.223138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.223159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 
00:27:00.263 [2024-11-19 17:45:02.223391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.223412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.223511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.223533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.223637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.223659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.223842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.223864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 00:27:00.263 [2024-11-19 17:45:02.224026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.263 [2024-11-19 17:45:02.224048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.263 qpair failed and we were unable to recover it. 
00:27:00.263 [2024-11-19 17:45:02.224225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.224246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.224463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.224484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.224669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.224691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.224770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.224792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.224881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.224902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.225049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.225071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.225238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.225260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.225357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.225378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.225535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.225556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.225652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.225673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.225909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.225930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.226023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.226044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.226271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.226293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.226438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.226460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.226624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.226645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.226812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.226833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.226942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.226971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.227117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.227136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.227323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.263 [2024-11-19 17:45:02.227344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.263 qpair failed and we were unable to recover it.
00:27:00.263 [2024-11-19 17:45:02.227452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.227474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.227734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.227756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.227862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.227883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.228030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.228053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.228144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.228165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.228329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.228350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.228457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.228479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.228720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.228741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.228835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.228860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.228960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.228982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.229079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.229101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.229255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.229276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.229441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.229461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.229563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.229585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.229763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.229783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.229932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.229960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.230110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.230130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.230226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.230247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.230411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.230431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.230579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.264 [2024-11-19 17:45:02.230602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.230768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.230789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.230890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.230915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:00.264 [2024-11-19 17:45:02.231014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.231036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.231203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.231224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.264 [2024-11-19 17:45:02.231339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.231360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.231505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.231526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:00.264 [2024-11-19 17:45:02.231635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.231656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.231822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.231843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.232003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.232025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.232123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.232144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.232243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.232264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.232375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.232397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.232496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.232517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.232603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.232628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.232785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.232805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.233050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.233073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.264 [2024-11-19 17:45:02.233175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.264 [2024-11-19 17:45:02.233196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.264 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.233290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.233312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.233468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.233489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.233576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.233597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.233818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.233839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.234049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.234072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.234291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.234312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.234458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.234480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.234716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.234737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.234817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.234838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.235016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.235038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.235203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.235223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.235371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.235392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.235549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.235570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.235716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.235737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.235833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.235854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.236071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.236093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.236190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.236211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.236381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.236404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.236585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.236606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.236697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.236717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.236869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.236890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.237071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.237093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.237192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.237213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.237311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.237333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.237435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.237456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.237658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.237680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.237770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.237791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.237956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.237978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.238056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.238078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.238173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.238194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.238357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.238378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.238583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.238605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420
00:27:00.265 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.238720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.238768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.238966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.239002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.265 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 [2024-11-19 17:45:02.239126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.239158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.265 [2024-11-19 17:45:02.239407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.239439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.265 qpair failed and we were unable to recover it.
00:27:00.265 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:00.265 [2024-11-19 17:45:02.239686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.265 [2024-11-19 17:45:02.239717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420
00:27:00.266 qpair failed and we were unable to recover it.
00:27:00.266 [2024-11-19 17:45:02.239908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.239939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f410c000b90 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.240201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.240225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.240388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.240409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.240505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.240526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.240622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.240643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 
00:27:00.266 [2024-11-19 17:45:02.240791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.240812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.240969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.240991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.241093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.241115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.241263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.241284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.241365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.241387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 
00:27:00.266 [2024-11-19 17:45:02.241501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.241522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.241693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.241715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.241905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.241926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.242181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.242202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.242365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.242386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 
00:27:00.266 [2024-11-19 17:45:02.242480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.242501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.242596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.242616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.242822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.242843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.243061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.243083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.243229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.243250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 
00:27:00.266 [2024-11-19 17:45:02.243431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.243452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.243622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.243644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.243725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.243746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.243922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.243943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.244185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.244207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 
00:27:00.266 [2024-11-19 17:45:02.244358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.244383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.244534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.244554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.244667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.244688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.244937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.244968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.245176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.245197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 
00:27:00.266 [2024-11-19 17:45:02.245306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.245327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.245406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.245428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.245572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.245593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.245750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.245771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.245958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.245980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 
00:27:00.266 [2024-11-19 17:45:02.246135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.246157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.246312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.246332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.266 [2024-11-19 17:45:02.246425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.266 [2024-11-19 17:45:02.246445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.266 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.246611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.246633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.267 [2024-11-19 17:45:02.246809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.246831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 
00:27:00.267 [2024-11-19 17:45:02.246938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.246967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.267 [2024-11-19 17:45:02.247142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.247164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.247312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.247333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.267 [2024-11-19 17:45:02.247431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.247453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 
00:27:00.267 [2024-11-19 17:45:02.247533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.247554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.267 [2024-11-19 17:45:02.247713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.247735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.247901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.247922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.248139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.248161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.248259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.248279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 
00:27:00.267 [2024-11-19 17:45:02.248385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.248406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.248562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.248586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.248818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.248838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.248956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.248978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.249144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.249166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 
00:27:00.267 [2024-11-19 17:45:02.249269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.249289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.249386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.249407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.249495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.249516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.249670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.249691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.249788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.249809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 
00:27:00.267 [2024-11-19 17:45:02.249896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.249917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.250086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.267 [2024-11-19 17:45:02.250108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9ba0 with addr=10.0.0.2, port=4420 00:27:00.267 qpair failed and we were unable to recover it. 00:27:00.267 [2024-11-19 17:45:02.250132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.267 [2024-11-19 17:45:02.252552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.267 [2024-11-19 17:45:02.252646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.267 [2024-11-19 17:45:02.252676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.267 [2024-11-19 17:45:02.252692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.267 [2024-11-19 17:45:02.252706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.267 [2024-11-19 17:45:02.252740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.267 qpair failed and we were unable to recover it. 
00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.267 [2024-11-19 17:45:02.262496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.267 [2024-11-19 17:45:02.262562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.267 [2024-11-19 17:45:02.262586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.267 [2024-11-19 17:45:02.262596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.267 [2024-11-19 17:45:02.262605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.267 [2024-11-19 17:45:02.262628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.267 qpair failed and we were unable to recover it. 
00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.267 17:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3615122 00:27:00.268 [2024-11-19 17:45:02.272494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.272556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.272572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.272580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.272587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.272602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.282494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.282565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.282580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.282587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.282594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.282608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.292490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.292554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.292572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.292580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.292586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.292601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.302478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.302535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.302552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.302560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.302567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.302583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.312505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.312565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.312579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.312587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.312594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.312609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.322535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.322594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.322608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.322615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.322622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.322637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.332584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.332655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.332671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.332678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.332687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.332703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.342576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.342635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.342649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.342657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.342663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.342678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.352635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.352684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.352698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.352705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.352712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.352727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.362681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.362742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.362757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.362764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.362771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.362786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.372726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.372787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.372801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.372809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.372816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.372831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.382692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.382767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.382782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.382789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.382795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.382810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.392729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.268 [2024-11-19 17:45:02.392781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.268 [2024-11-19 17:45:02.392795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.268 [2024-11-19 17:45:02.392802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.268 [2024-11-19 17:45:02.392809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.268 [2024-11-19 17:45:02.392824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.268 qpair failed and we were unable to recover it. 
00:27:00.268 [2024-11-19 17:45:02.402761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.269 [2024-11-19 17:45:02.402817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.269 [2024-11-19 17:45:02.402831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.269 [2024-11-19 17:45:02.402838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.269 [2024-11-19 17:45:02.402844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.269 [2024-11-19 17:45:02.402859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.269 qpair failed and we were unable to recover it. 
00:27:00.269 [2024-11-19 17:45:02.412829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.269 [2024-11-19 17:45:02.412885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.269 [2024-11-19 17:45:02.412900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.269 [2024-11-19 17:45:02.412907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.269 [2024-11-19 17:45:02.412914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.269 [2024-11-19 17:45:02.412929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.269 qpair failed and we were unable to recover it. 
00:27:00.269 [2024-11-19 17:45:02.422792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.269 [2024-11-19 17:45:02.422871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.269 [2024-11-19 17:45:02.422890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.269 [2024-11-19 17:45:02.422897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.269 [2024-11-19 17:45:02.422903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.269 [2024-11-19 17:45:02.422918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.269 qpair failed and we were unable to recover it. 
00:27:00.269 [2024-11-19 17:45:02.432869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.269 [2024-11-19 17:45:02.432925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.269 [2024-11-19 17:45:02.432940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.269 [2024-11-19 17:45:02.432952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.269 [2024-11-19 17:45:02.432959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.269 [2024-11-19 17:45:02.432974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.269 qpair failed and we were unable to recover it. 
00:27:00.269 [2024-11-19 17:45:02.442878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.269 [2024-11-19 17:45:02.442932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.269 [2024-11-19 17:45:02.442952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.269 [2024-11-19 17:45:02.442959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.269 [2024-11-19 17:45:02.442966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.269 [2024-11-19 17:45:02.442981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.269 qpair failed and we were unable to recover it. 
00:27:00.530 [2024-11-19 17:45:02.452861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.530 [2024-11-19 17:45:02.452912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.530 [2024-11-19 17:45:02.452926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.530 [2024-11-19 17:45:02.452934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.530 [2024-11-19 17:45:02.452941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.530 [2024-11-19 17:45:02.452963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.530 qpair failed and we were unable to recover it. 
00:27:00.530 [2024-11-19 17:45:02.462929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.530 [2024-11-19 17:45:02.462995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.530 [2024-11-19 17:45:02.463009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.530 [2024-11-19 17:45:02.463016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.530 [2024-11-19 17:45:02.463026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.530 [2024-11-19 17:45:02.463041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.530 qpair failed and we were unable to recover it. 
00:27:00.530 [2024-11-19 17:45:02.472983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.530 [2024-11-19 17:45:02.473053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.530 [2024-11-19 17:45:02.473069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.530 [2024-11-19 17:45:02.473076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.530 [2024-11-19 17:45:02.473083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.530 [2024-11-19 17:45:02.473098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.530 qpair failed and we were unable to recover it. 
00:27:00.530 [2024-11-19 17:45:02.482970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.530 [2024-11-19 17:45:02.483041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.530 [2024-11-19 17:45:02.483056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.530 [2024-11-19 17:45:02.483063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.530 [2024-11-19 17:45:02.483069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.530 [2024-11-19 17:45:02.483085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.530 qpair failed and we were unable to recover it. 
00:27:00.530 [2024-11-19 17:45:02.493060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.530 [2024-11-19 17:45:02.493170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.530 [2024-11-19 17:45:02.493186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.530 [2024-11-19 17:45:02.493193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.530 [2024-11-19 17:45:02.493200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.530 [2024-11-19 17:45:02.493216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.503048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.503118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.503133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.503140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.503146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.503160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.513071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.513136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.513151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.513158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.513165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.513179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.523111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.523169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.523185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.523193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.523200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.523216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.533125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.533197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.533212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.533219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.533225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.533240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.543161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.543212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.543227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.543234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.543242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.543257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.553179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.553233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.553251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.553258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.553265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.553279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.563246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.563305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.563319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.563326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.563333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.563347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.573208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.573285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.573300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.573307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.573313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.573328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.583275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.583329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.583343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.583350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.583357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.583371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.593253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.593305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.593320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.593326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.593337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.593352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.603330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.603390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.603404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.603412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.603418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.603432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.613326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.613389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.613403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.613410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.613417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.613431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.531 [2024-11-19 17:45:02.623381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.531 [2024-11-19 17:45:02.623433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.531 [2024-11-19 17:45:02.623447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.531 [2024-11-19 17:45:02.623455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.531 [2024-11-19 17:45:02.623461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.531 [2024-11-19 17:45:02.623476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.531 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.633392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.633451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.633465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.633473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.633479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.633493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.643433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.643494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.643509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.643516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.643522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.643537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.653467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.653522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.653536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.653543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.653550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.653564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.663481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.663537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.663551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.663559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.663565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.663580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.673518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.673570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.673583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.673590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.673597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.673613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.683542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.683623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.683642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.683649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.683656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.683671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.693578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.693638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.693655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.693662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.693668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.693684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.703622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.703699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.703714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.703721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.703728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.703743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.713640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.713702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.713717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.713724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.713730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.713745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.723662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.723723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.723738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.723745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.723754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.723769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.733683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.733736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.733751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.733758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.733764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.733780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.532 [2024-11-19 17:45:02.743718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.532 [2024-11-19 17:45:02.743767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.532 [2024-11-19 17:45:02.743781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.532 [2024-11-19 17:45:02.743789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.532 [2024-11-19 17:45:02.743795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.532 [2024-11-19 17:45:02.743811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.532 qpair failed and we were unable to recover it. 
00:27:00.793 [2024-11-19 17:45:02.753730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.753783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.753798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.753805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.753812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.753827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.763771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.763825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.763839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.763846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.763853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.763869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.773797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.773852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.773867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.773875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.773881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.773896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.783807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.783874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.783888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.783895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.783901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.783916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.793836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.793887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.793901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.793908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.793915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.793930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.803882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.803940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.803959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.803966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.803973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.803988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.813904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.813978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.813999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.814007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.814013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.814029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.823897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.823952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.823967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.823974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.823981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.823996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.833966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.834023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.834036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.834044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.834050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.834065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.844022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.844121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.844134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.844141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.844148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.844163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.854019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.854074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.854088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.854095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.854105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.854120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.864043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.864115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.864129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.864137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.794 [2024-11-19 17:45:02.864143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.794 [2024-11-19 17:45:02.864157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.794 qpair failed and we were unable to recover it. 
00:27:00.794 [2024-11-19 17:45:02.874029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.794 [2024-11-19 17:45:02.874096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.794 [2024-11-19 17:45:02.874110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.794 [2024-11-19 17:45:02.874117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.874124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.874138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.884120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.884179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.884193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.884200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.884206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.884222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.894143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.894204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.894223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.894232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.894238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.894255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.904088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.904143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.904158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.904165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.904172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.904187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.914192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.914252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.914266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.914274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.914280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.914295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.924225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.924283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.924297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.924304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.924311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.924326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.934303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.934361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.934375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.934382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.934389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.934404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.944277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.944331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.944348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.944355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.944361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.944376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.954302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.954366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.954382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.954389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.954396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.954411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.964388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.964445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.964461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.964469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.964476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.964491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.974409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.974462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.974477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.974484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.974491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.974506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.984374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.984431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.984446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.984453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.984463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.984478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:02.994433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:02.994489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:02.994505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:02.994512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.795 [2024-11-19 17:45:02.994519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.795 [2024-11-19 17:45:02.994535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.795 qpair failed and we were unable to recover it. 
00:27:00.795 [2024-11-19 17:45:03.004490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.795 [2024-11-19 17:45:03.004591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.795 [2024-11-19 17:45:03.004605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.795 [2024-11-19 17:45:03.004612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.796 [2024-11-19 17:45:03.004619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:00.796 [2024-11-19 17:45:03.004634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.796 qpair failed and we were unable to recover it. 
00:27:01.056 [2024-11-19 17:45:03.014403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.056 [2024-11-19 17:45:03.014458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.056 [2024-11-19 17:45:03.014472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.056 [2024-11-19 17:45:03.014479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.056 [2024-11-19 17:45:03.014486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.056 [2024-11-19 17:45:03.014500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.056 qpair failed and we were unable to recover it. 
00:27:01.056 [2024-11-19 17:45:03.024441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.056 [2024-11-19 17:45:03.024492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.056 [2024-11-19 17:45:03.024507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.056 [2024-11-19 17:45:03.024514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.056 [2024-11-19 17:45:03.024521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.056 [2024-11-19 17:45:03.024536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.056 qpair failed and we were unable to recover it. 
00:27:01.056 [2024-11-19 17:45:03.034474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.056 [2024-11-19 17:45:03.034556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.056 [2024-11-19 17:45:03.034570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.056 [2024-11-19 17:45:03.034578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.056 [2024-11-19 17:45:03.034583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.056 [2024-11-19 17:45:03.034598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.056 qpair failed and we were unable to recover it. 
00:27:01.056 [2024-11-19 17:45:03.044594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.056 [2024-11-19 17:45:03.044651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.056 [2024-11-19 17:45:03.044665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.056 [2024-11-19 17:45:03.044672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.056 [2024-11-19 17:45:03.044679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.056 [2024-11-19 17:45:03.044694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.056 qpair failed and we were unable to recover it. 
00:27:01.056 [2024-11-19 17:45:03.054711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.056 [2024-11-19 17:45:03.054778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.056 [2024-11-19 17:45:03.054793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.056 [2024-11-19 17:45:03.054800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.056 [2024-11-19 17:45:03.054807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.056 [2024-11-19 17:45:03.054822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.056 qpair failed and we were unable to recover it. 
00:27:01.056 [2024-11-19 17:45:03.064637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.056 [2024-11-19 17:45:03.064715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.064730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.064738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.064745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.064761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.074610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.074664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.074682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.074689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.074695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.074710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.084727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.084784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.084798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.084805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.084811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.084825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.094738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.094798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.094813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.094821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.094827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.094843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.104732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.104787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.104802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.104809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.104816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.104831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.114808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.114903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.114917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.114925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.114934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.114956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.124852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.124958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.124972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.124979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.124986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.125002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.134812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.134888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.134902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.134910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.134915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.134930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.144835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.144935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.144956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.144963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.144970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.144985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.154913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.154974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.154989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.154996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.155002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.155016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.164929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.165032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.165047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.165054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.165062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.165077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.174970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.175029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.175044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.175051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.175058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.175074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.184955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.185004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.057 [2024-11-19 17:45:03.185019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.057 [2024-11-19 17:45:03.185026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.057 [2024-11-19 17:45:03.185032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.057 [2024-11-19 17:45:03.185047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.057 qpair failed and we were unable to recover it. 
00:27:01.057 [2024-11-19 17:45:03.195012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.057 [2024-11-19 17:45:03.195090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.058 [2024-11-19 17:45:03.195105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.058 [2024-11-19 17:45:03.195112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.058 [2024-11-19 17:45:03.195119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.058 [2024-11-19 17:45:03.195134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.058 qpair failed and we were unable to recover it. 
00:27:01.058 [2024-11-19 17:45:03.205027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.058 [2024-11-19 17:45:03.205094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.058 [2024-11-19 17:45:03.205112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.058 [2024-11-19 17:45:03.205119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.058 [2024-11-19 17:45:03.205125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.058 [2024-11-19 17:45:03.205141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.058 qpair failed and we were unable to recover it. 
00:27:01.058 [2024-11-19 17:45:03.214988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.058 [2024-11-19 17:45:03.215049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.058 [2024-11-19 17:45:03.215064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.058 [2024-11-19 17:45:03.215071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.058 [2024-11-19 17:45:03.215077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.058 [2024-11-19 17:45:03.215092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.058 qpair failed and we were unable to recover it. 
00:27:01.058 [2024-11-19 17:45:03.225113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.058 [2024-11-19 17:45:03.225172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.058 [2024-11-19 17:45:03.225187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.058 [2024-11-19 17:45:03.225194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.058 [2024-11-19 17:45:03.225200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.058 [2024-11-19 17:45:03.225215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.058 qpair failed and we were unable to recover it. 
00:27:01.058 [2024-11-19 17:45:03.235127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.058 [2024-11-19 17:45:03.235209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.058 [2024-11-19 17:45:03.235224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.058 [2024-11-19 17:45:03.235231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.058 [2024-11-19 17:45:03.235237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.058 [2024-11-19 17:45:03.235251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.058 qpair failed and we were unable to recover it. 
00:27:01.058 [2024-11-19 17:45:03.245155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.058 [2024-11-19 17:45:03.245217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.058 [2024-11-19 17:45:03.245233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.058 [2024-11-19 17:45:03.245244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.058 [2024-11-19 17:45:03.245252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.058 [2024-11-19 17:45:03.245267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.058 qpair failed and we were unable to recover it. 
00:27:01.058 [2024-11-19 17:45:03.255171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.058 [2024-11-19 17:45:03.255228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.058 [2024-11-19 17:45:03.255242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.058 [2024-11-19 17:45:03.255249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.058 [2024-11-19 17:45:03.255255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.058 [2024-11-19 17:45:03.255269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.058 qpair failed and we were unable to recover it. 
00:27:01.058 [2024-11-19 17:45:03.265202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.058 [2024-11-19 17:45:03.265255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.058 [2024-11-19 17:45:03.265269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.058 [2024-11-19 17:45:03.265276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.058 [2024-11-19 17:45:03.265283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.058 [2024-11-19 17:45:03.265298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.058 qpair failed and we were unable to recover it.
00:27:01.319 [2024-11-19 17:45:03.275163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.319 [2024-11-19 17:45:03.275218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.319 [2024-11-19 17:45:03.275232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.319 [2024-11-19 17:45:03.275239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.319 [2024-11-19 17:45:03.275246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.319 [2024-11-19 17:45:03.275261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.319 qpair failed and we were unable to recover it.
00:27:01.319 [2024-11-19 17:45:03.285196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.319 [2024-11-19 17:45:03.285254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.319 [2024-11-19 17:45:03.285268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.319 [2024-11-19 17:45:03.285276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.319 [2024-11-19 17:45:03.285282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.319 [2024-11-19 17:45:03.285297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.319 qpair failed and we were unable to recover it.
00:27:01.319 [2024-11-19 17:45:03.295281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.319 [2024-11-19 17:45:03.295334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.319 [2024-11-19 17:45:03.295348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.319 [2024-11-19 17:45:03.295355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.319 [2024-11-19 17:45:03.295361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.319 [2024-11-19 17:45:03.295376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.319 qpair failed and we were unable to recover it.
00:27:01.319 [2024-11-19 17:45:03.305309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.319 [2024-11-19 17:45:03.305363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.319 [2024-11-19 17:45:03.305378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.319 [2024-11-19 17:45:03.305386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.305393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.305408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.315333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.315429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.315443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.315450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.315457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.315472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.325369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.325427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.325442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.325449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.325456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.325471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.335374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.335433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.335451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.335459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.335465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.335479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.345421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.345471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.345485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.345493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.345499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.345515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.355449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.355502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.355517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.355524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.355531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.355546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.365526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.365581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.365595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.365602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.365609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.365625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.375507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.375563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.375576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.375587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.375594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.375608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.385527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.385581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.385595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.385602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.385609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.385624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.395557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.395610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.395624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.395631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.395637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.395653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.405589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.405650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.405663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.405670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.405677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.405691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.415630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.415683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.415698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.415704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.415711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.415726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.425645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.320 [2024-11-19 17:45:03.425699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.320 [2024-11-19 17:45:03.425714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.320 [2024-11-19 17:45:03.425721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.320 [2024-11-19 17:45:03.425728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.320 [2024-11-19 17:45:03.425743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.320 qpair failed and we were unable to recover it.
00:27:01.320 [2024-11-19 17:45:03.435705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.435759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.435773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.435780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.435787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.435802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.445719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.445774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.445789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.445795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.445803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.445818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.455741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.455794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.455808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.455816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.455822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.455837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.465771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.465830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.465852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.465859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.465866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.465881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.475818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.475877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.475892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.475900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.475906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.475922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.485750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.485806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.485820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.485827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.485834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.485849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.495906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.495968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.495983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.495991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.495997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.496012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.505881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.505939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.505960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.505970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.505977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.505992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.515969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.516070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.516085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.516092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.516099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.516115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.525937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.526034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.526051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.526059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.526066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.526082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.321 [2024-11-19 17:45:03.536006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.321 [2024-11-19 17:45:03.536066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.321 [2024-11-19 17:45:03.536080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.321 [2024-11-19 17:45:03.536087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.321 [2024-11-19 17:45:03.536094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.321 [2024-11-19 17:45:03.536109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.321 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.545996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.583 [2024-11-19 17:45:03.546048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.583 [2024-11-19 17:45:03.546062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.583 [2024-11-19 17:45:03.546070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.583 [2024-11-19 17:45:03.546077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.583 [2024-11-19 17:45:03.546092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.583 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.556037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.583 [2024-11-19 17:45:03.556092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.583 [2024-11-19 17:45:03.556106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.583 [2024-11-19 17:45:03.556114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.583 [2024-11-19 17:45:03.556121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.583 [2024-11-19 17:45:03.556138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.583 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.566061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.583 [2024-11-19 17:45:03.566117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.583 [2024-11-19 17:45:03.566132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.583 [2024-11-19 17:45:03.566139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.583 [2024-11-19 17:45:03.566145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.583 [2024-11-19 17:45:03.566160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.583 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.576075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.583 [2024-11-19 17:45:03.576129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.583 [2024-11-19 17:45:03.576143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.583 [2024-11-19 17:45:03.576150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.583 [2024-11-19 17:45:03.576157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.583 [2024-11-19 17:45:03.576171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.583 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.586105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.583 [2024-11-19 17:45:03.586173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.583 [2024-11-19 17:45:03.586188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.583 [2024-11-19 17:45:03.586195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.583 [2024-11-19 17:45:03.586202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.583 [2024-11-19 17:45:03.586216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.583 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.596193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.583 [2024-11-19 17:45:03.596300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.583 [2024-11-19 17:45:03.596314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.583 [2024-11-19 17:45:03.596321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.583 [2024-11-19 17:45:03.596328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.583 [2024-11-19 17:45:03.596343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.583 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.606182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.583 [2024-11-19 17:45:03.606257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.583 [2024-11-19 17:45:03.606271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.583 [2024-11-19 17:45:03.606278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.583 [2024-11-19 17:45:03.606284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:01.583 [2024-11-19 17:45:03.606299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:01.583 qpair failed and we were unable to recover it.
00:27:01.583 [2024-11-19 17:45:03.616257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.583 [2024-11-19 17:45:03.616358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.583 [2024-11-19 17:45:03.616374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.583 [2024-11-19 17:45:03.616383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.583 [2024-11-19 17:45:03.616390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.583 [2024-11-19 17:45:03.616405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.583 qpair failed and we were unable to recover it. 
00:27:01.583 [2024-11-19 17:45:03.626254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.583 [2024-11-19 17:45:03.626311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.583 [2024-11-19 17:45:03.626325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.583 [2024-11-19 17:45:03.626332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.583 [2024-11-19 17:45:03.626338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.583 [2024-11-19 17:45:03.626353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.583 qpair failed and we were unable to recover it. 
00:27:01.583 [2024-11-19 17:45:03.636271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.583 [2024-11-19 17:45:03.636327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.583 [2024-11-19 17:45:03.636340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.583 [2024-11-19 17:45:03.636351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.583 [2024-11-19 17:45:03.636357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.583 [2024-11-19 17:45:03.636372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.583 qpair failed and we were unable to recover it. 
00:27:01.583 [2024-11-19 17:45:03.646291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.583 [2024-11-19 17:45:03.646351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.583 [2024-11-19 17:45:03.646366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.583 [2024-11-19 17:45:03.646373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.583 [2024-11-19 17:45:03.646380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.583 [2024-11-19 17:45:03.646396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.583 qpair failed and we were unable to recover it. 
00:27:01.583 [2024-11-19 17:45:03.656311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.583 [2024-11-19 17:45:03.656363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.583 [2024-11-19 17:45:03.656377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.583 [2024-11-19 17:45:03.656385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.656391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.656406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.666332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.666386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.666400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.666408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.666416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.666430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.676321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.676373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.676387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.676394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.676400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.676415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.686399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.686454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.686468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.686475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.686482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.686497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.696416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.696479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.696492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.696500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.696506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.696521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.706444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.706493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.706507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.706515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.706522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.706536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.716451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.716508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.716522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.716529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.716535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.716549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.726497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.726562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.726578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.726586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.726593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.726608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.736528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.736584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.736598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.736605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.736612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.736627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.746565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.746622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.746636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.746644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.746651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.746666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.756587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.756641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.756657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.756665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.756671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.756688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.766612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.766688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.766703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.766713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.766719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.766734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.776641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.776698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.776712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.776720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.584 [2024-11-19 17:45:03.776727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.584 [2024-11-19 17:45:03.776743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.584 qpair failed and we were unable to recover it. 
00:27:01.584 [2024-11-19 17:45:03.786670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.584 [2024-11-19 17:45:03.786723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.584 [2024-11-19 17:45:03.786737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.584 [2024-11-19 17:45:03.786745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.585 [2024-11-19 17:45:03.786751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.585 [2024-11-19 17:45:03.786766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.585 qpair failed and we were unable to recover it. 
00:27:01.585 [2024-11-19 17:45:03.796741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.585 [2024-11-19 17:45:03.796799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.585 [2024-11-19 17:45:03.796813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.585 [2024-11-19 17:45:03.796821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.585 [2024-11-19 17:45:03.796827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.585 [2024-11-19 17:45:03.796842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.585 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.806725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.806782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.806796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.806803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.806810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.806825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.816764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.816817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.816831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.816838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.816844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.816860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.826781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.826839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.826854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.826861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.826867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.826883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.836814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.836872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.836886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.836894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.836900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.836915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.846874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.846937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.846958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.846965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.846973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.846989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.856883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.856945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.856965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.856972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.856978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.856993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.866897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.866956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.866971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.866978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.866985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.866999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.876926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.876982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.876997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.877004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.877010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.877025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
00:27:01.846 [2024-11-19 17:45:03.886974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.846 [2024-11-19 17:45:03.887030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.846 [2024-11-19 17:45:03.887043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.846 [2024-11-19 17:45:03.887051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.846 [2024-11-19 17:45:03.887057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:01.846 [2024-11-19 17:45:03.887072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.846 qpair failed and we were unable to recover it. 
[2024-11-19 17:45:03.896998 – 17:45:04.228026] The identical connect-failure sequence repeats 34 more times at ~10 ms intervals: Unknown controller ID 0x1 → Connect command failed, rc -5 (trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) → Connect command completed with error: sct 1, sc 130 → Failed to poll NVMe-oF Fabric CONNECT command → Failed to connect tqpair=0x18a9ba0 → CQ transport error -6 (No such device or address) on qpair id 3 → qpair failed and we were unable to recover it.
00:27:02.171 [2024-11-19 17:45:04.238001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.171 [2024-11-19 17:45:04.238061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.171 [2024-11-19 17:45:04.238075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.171 [2024-11-19 17:45:04.238082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.171 [2024-11-19 17:45:04.238088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.171 [2024-11-19 17:45:04.238103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.171 qpair failed and we were unable to recover it. 
00:27:02.171 [2024-11-19 17:45:04.248012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.171 [2024-11-19 17:45:04.248089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.171 [2024-11-19 17:45:04.248103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.171 [2024-11-19 17:45:04.248110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.171 [2024-11-19 17:45:04.248116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.171 [2024-11-19 17:45:04.248130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.171 qpair failed and we were unable to recover it. 
00:27:02.171 [2024-11-19 17:45:04.257956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.171 [2024-11-19 17:45:04.258012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.171 [2024-11-19 17:45:04.258026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.171 [2024-11-19 17:45:04.258033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.171 [2024-11-19 17:45:04.258040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.171 [2024-11-19 17:45:04.258054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.171 qpair failed and we were unable to recover it. 
00:27:02.171 [2024-11-19 17:45:04.268048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.171 [2024-11-19 17:45:04.268102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.171 [2024-11-19 17:45:04.268116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.268123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.268130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.268144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.278086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.278136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.278151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.278158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.278164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.278179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.288123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.288190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.288204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.288214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.288221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.288235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.298147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.298201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.298215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.298222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.298229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.298244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.308171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.308226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.308239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.308246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.308253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.308267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.318239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.318298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.318312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.318320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.318326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.318340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.328236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.328296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.328310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.328318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.328324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.328342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.338253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.338336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.338354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.338363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.338370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.338388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.348288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.348359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.348375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.348382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.348388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.348404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.172 [2024-11-19 17:45:04.358383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.172 [2024-11-19 17:45:04.358441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.172 [2024-11-19 17:45:04.358456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.172 [2024-11-19 17:45:04.358463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.172 [2024-11-19 17:45:04.358469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.172 [2024-11-19 17:45:04.358485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.172 qpair failed and we were unable to recover it. 
00:27:02.444 [2024-11-19 17:45:04.368273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.444 [2024-11-19 17:45:04.368332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.444 [2024-11-19 17:45:04.368349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.444 [2024-11-19 17:45:04.368357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.444 [2024-11-19 17:45:04.368363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.444 [2024-11-19 17:45:04.368380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.444 qpair failed and we were unable to recover it. 
00:27:02.444 [2024-11-19 17:45:04.378333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.444 [2024-11-19 17:45:04.378405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.444 [2024-11-19 17:45:04.378422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.444 [2024-11-19 17:45:04.378429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.444 [2024-11-19 17:45:04.378435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.444 [2024-11-19 17:45:04.378452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.444 qpair failed and we were unable to recover it. 
00:27:02.444 [2024-11-19 17:45:04.388383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.444 [2024-11-19 17:45:04.388441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.444 [2024-11-19 17:45:04.388457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.444 [2024-11-19 17:45:04.388464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.444 [2024-11-19 17:45:04.388471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.444 [2024-11-19 17:45:04.388491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.444 qpair failed and we were unable to recover it. 
00:27:02.444 [2024-11-19 17:45:04.398415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.444 [2024-11-19 17:45:04.398467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.398481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.398488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.398495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.398509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.408444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.408502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.408516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.408524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.408531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.408545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.418490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.418560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.418574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.418589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.418595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.418611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.428462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.428556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.428571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.428577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.428583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.428599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.438545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.438598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.438612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.438619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.438626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.438640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.448587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.448656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.448671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.448678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.448684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.448700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.458595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.458652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.458667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.458675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.458682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.458700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.468621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.468676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.468691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.468698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.468705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.468719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.478668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.478721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.478735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.478742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.478749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.478764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.488712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.488769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.488783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.488790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.488797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.488811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.498718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.498774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.498789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.498796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.498803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.498818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.508690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.508747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.508761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.508768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.508775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.508790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.518777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.518855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.518870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.445 [2024-11-19 17:45:04.518877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.445 [2024-11-19 17:45:04.518883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.445 [2024-11-19 17:45:04.518898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.445 qpair failed and we were unable to recover it. 
00:27:02.445 [2024-11-19 17:45:04.528775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.445 [2024-11-19 17:45:04.528853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.445 [2024-11-19 17:45:04.528868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.528875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.528882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.528897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.538796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.538851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.538865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.538873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.538879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.538894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.548772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.548836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.548850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.548861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.548867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.548882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.558823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.558876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.558892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.558900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.558907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.558923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.568920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.568985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.569001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.569009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.569015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.569031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.578888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.578953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.578967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.578974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.578981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.578999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.588967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.589024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.589039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.589047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.589054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.589072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.598930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.599000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.599015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.599022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.599029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.599044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.608972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.609028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.609044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.609051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.609058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.609074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.618988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.619049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.619064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.619071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.619078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.619094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.629044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.629102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.629117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.629124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.629131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.629147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.639047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.639109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.639123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.639131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.639138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.639152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.649128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.649195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.649210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.649217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.446 [2024-11-19 17:45:04.649223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.446 [2024-11-19 17:45:04.649238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.446 qpair failed and we were unable to recover it. 
00:27:02.446 [2024-11-19 17:45:04.659152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.446 [2024-11-19 17:45:04.659240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.446 [2024-11-19 17:45:04.659254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.446 [2024-11-19 17:45:04.659262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.447 [2024-11-19 17:45:04.659268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.447 [2024-11-19 17:45:04.659282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.447 qpair failed and we were unable to recover it. 
00:27:02.707 [2024-11-19 17:45:04.669185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.707 [2024-11-19 17:45:04.669286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.707 [2024-11-19 17:45:04.669300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.707 [2024-11-19 17:45:04.669308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.707 [2024-11-19 17:45:04.669315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.707 [2024-11-19 17:45:04.669331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.707 qpair failed and we were unable to recover it. 
00:27:02.707 [2024-11-19 17:45:04.679230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.707 [2024-11-19 17:45:04.679287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.707 [2024-11-19 17:45:04.679301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.707 [2024-11-19 17:45:04.679311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.707 [2024-11-19 17:45:04.679318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.707 [2024-11-19 17:45:04.679333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.707 qpair failed and we were unable to recover it. 
00:27:02.707 [2024-11-19 17:45:04.689283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.707 [2024-11-19 17:45:04.689386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.707 [2024-11-19 17:45:04.689400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.707 [2024-11-19 17:45:04.689407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.707 [2024-11-19 17:45:04.689414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.707 [2024-11-19 17:45:04.689429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.707 qpair failed and we were unable to recover it. 
00:27:02.707 [2024-11-19 17:45:04.699251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.707 [2024-11-19 17:45:04.699309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.707 [2024-11-19 17:45:04.699322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.707 [2024-11-19 17:45:04.699329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.707 [2024-11-19 17:45:04.699337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.707 [2024-11-19 17:45:04.699351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.707 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.709312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.709370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.709384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.709391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.709398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.709412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.719331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.719388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.719404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.719411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.719418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.719437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.729368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.729426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.729440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.729447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.729454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.729468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.739330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.739384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.739398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.739405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.739411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.739426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.749421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.749488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.749502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.749509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.749516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.749531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.759386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.759444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.759460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.759469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.759475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.759490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.769484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.769550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.769566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.769574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.769580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.769596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.779522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.779579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.779593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.779600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.779607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.779622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.789536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.789595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.789609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.789617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.789623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.789637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.799559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.799617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.799631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.799639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.799645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.799660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.809624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.809718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.809732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.809743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.809749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.809765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.819665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.819725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.819739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.819746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.819752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.819767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.708 [2024-11-19 17:45:04.829649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.708 [2024-11-19 17:45:04.829704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.708 [2024-11-19 17:45:04.829718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.708 [2024-11-19 17:45:04.829726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.708 [2024-11-19 17:45:04.829732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.708 [2024-11-19 17:45:04.829747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.708 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.839678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.839730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.839744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.839750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.839757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.839772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.849648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.849702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.849716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.849723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.849729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.849747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.859766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.859829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.859844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.859851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.859857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.859872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.869786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.869857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.869872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.869879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.869885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.869900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.879824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.879876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.879889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.879896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.879903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.879917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.889834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.889901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.889916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.889923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.889929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.889944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.899860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.899914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.899932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.899940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.899955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.899972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.909907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.909968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.909983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.909990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.909997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.910012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.709 [2024-11-19 17:45:04.919907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.709 [2024-11-19 17:45:04.919964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.709 [2024-11-19 17:45:04.919978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.709 [2024-11-19 17:45:04.919986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.709 [2024-11-19 17:45:04.919992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.709 [2024-11-19 17:45:04.920008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.709 qpair failed and we were unable to recover it. 
00:27:02.970 [2024-11-19 17:45:04.929973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.970 [2024-11-19 17:45:04.930033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.970 [2024-11-19 17:45:04.930047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.970 [2024-11-19 17:45:04.930055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.970 [2024-11-19 17:45:04.930061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.970 [2024-11-19 17:45:04.930076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.970 qpair failed and we were unable to recover it. 
00:27:02.970 [2024-11-19 17:45:04.939970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.970 [2024-11-19 17:45:04.940025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.970 [2024-11-19 17:45:04.940039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.970 [2024-11-19 17:45:04.940051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.970 [2024-11-19 17:45:04.940058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.970 [2024-11-19 17:45:04.940073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.970 qpair failed and we were unable to recover it. 
00:27:02.970 [2024-11-19 17:45:04.949989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.970 [2024-11-19 17:45:04.950048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.970 [2024-11-19 17:45:04.950062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.970 [2024-11-19 17:45:04.950069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.970 [2024-11-19 17:45:04.950076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.970 [2024-11-19 17:45:04.950091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.970 qpair failed and we were unable to recover it. 
00:27:02.970 [2024-11-19 17:45:04.960066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.970 [2024-11-19 17:45:04.960134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:04.960148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:04.960156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:04.960162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:04.960178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:04.970063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:04.970134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:04.970151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:04.970158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:04.970164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:04.970181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:04.980082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:04.980147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:04.980162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:04.980170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:04.980176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:04.980194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:04.990103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:04.990165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:04.990179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:04.990187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:04.990194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:04.990209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.000135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.000185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.000198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.000205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.000212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.000227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.010152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.010224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.010238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.010245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.010251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.010267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.020122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.020178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.020192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.020199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.020205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.020220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.030220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.030271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.030285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.030292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.030299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.030314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.040251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.040307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.040321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.040328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.040335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.040349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.050293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.050361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.050375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.050382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.050388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.050403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.060399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.060464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.060478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.060485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.060492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.060506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.070370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.070474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.070492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.070499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.070506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.070521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.080398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.080450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.971 [2024-11-19 17:45:05.080465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.971 [2024-11-19 17:45:05.080472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.971 [2024-11-19 17:45:05.080479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.971 [2024-11-19 17:45:05.080495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.971 qpair failed and we were unable to recover it. 
00:27:02.971 [2024-11-19 17:45:05.090431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.971 [2024-11-19 17:45:05.090490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.090504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.090511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.090518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.090532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.100459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.100561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.100575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.100582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.100589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.100604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.110369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.110424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.110438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.110445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.110452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.110471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.120469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.120526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.120540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.120547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.120554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.120569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.130507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.130562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.130576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.130583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.130589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.130604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.140510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.140611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.140625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.140632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.140639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.140654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.150553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.150604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.150618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.150626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.150632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.150647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.160587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.160637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.160651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.160658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.160665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.160680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.170616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.170716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.170730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.170737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.170744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.170760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:02.972 [2024-11-19 17:45:05.180641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.972 [2024-11-19 17:45:05.180698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.972 [2024-11-19 17:45:05.180714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.972 [2024-11-19 17:45:05.180722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.972 [2024-11-19 17:45:05.180729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:02.972 [2024-11-19 17:45:05.180745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:02.972 qpair failed and we were unable to recover it. 
00:27:03.233 [2024-11-19 17:45:05.190681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.233 [2024-11-19 17:45:05.190762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.233 [2024-11-19 17:45:05.190777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.233 [2024-11-19 17:45:05.190785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.233 [2024-11-19 17:45:05.190791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.233 [2024-11-19 17:45:05.190807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.233 qpair failed and we were unable to recover it. 
00:27:03.233 [2024-11-19 17:45:05.200746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.200803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.200824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.200832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.200838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.200854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.210795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.210847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.210862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.210869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.210876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.210891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.220756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.220816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.220830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.220837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.220844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.220858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.230782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.230835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.230849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.230857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.230863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.230879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.240805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.240861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.240875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.240883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.240890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.240908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.250849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.250906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.250919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.250926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.250933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.250953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.260879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.260935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.260955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.260963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.260970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.260985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.270895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.270967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.270981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.270988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.270995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.271009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.280930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.281018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.281032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.281039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.281045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.281060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.290957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.291014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.291028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.291035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.291042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.291057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.300981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.301039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.301053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.301060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.301067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.301081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.311013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.311085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.311099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.311106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.311113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.311127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.321041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.234 [2024-11-19 17:45:05.321097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.234 [2024-11-19 17:45:05.321111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.234 [2024-11-19 17:45:05.321119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.234 [2024-11-19 17:45:05.321126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.234 [2024-11-19 17:45:05.321140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.234 qpair failed and we were unable to recover it. 
00:27:03.234 [2024-11-19 17:45:05.331048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.331105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.331122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.331130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.331136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.331150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.341107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.341161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.341175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.341182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.341189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.341204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.351088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.351143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.351156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.351163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.351170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.351184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.361212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.361265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.361279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.361286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.361293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.361307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.371193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.371265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.371281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.371288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.371294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.371313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.381240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.381301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.381315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.381322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.381328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.381343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.391237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.391296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.391312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.391320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.391327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.391342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.401274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.401326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.401341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.401348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.401354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.401369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.411317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.411372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.411387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.411394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.411400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.411415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.421336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.421421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.421436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.421443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.421449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.421464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.431357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.235 [2024-11-19 17:45:05.431412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.235 [2024-11-19 17:45:05.431427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.235 [2024-11-19 17:45:05.431435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.235 [2024-11-19 17:45:05.431441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.235 [2024-11-19 17:45:05.431457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.235 qpair failed and we were unable to recover it. 
00:27:03.235 [2024-11-19 17:45:05.441391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.235 [2024-11-19 17:45:05.441445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.235 [2024-11-19 17:45:05.441459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.235 [2024-11-19 17:45:05.441465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.235 [2024-11-19 17:45:05.441472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.235 [2024-11-19 17:45:05.441486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.235 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.451425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.451493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.451508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.451515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.497 [2024-11-19 17:45:05.451521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.497 [2024-11-19 17:45:05.451536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.497 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.461475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.461532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.461549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.461557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.497 [2024-11-19 17:45:05.461563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.497 [2024-11-19 17:45:05.461578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.497 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.471475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.471525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.471540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.471547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.497 [2024-11-19 17:45:05.471554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.497 [2024-11-19 17:45:05.471569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.497 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.481505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.481562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.481575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.481582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.497 [2024-11-19 17:45:05.481589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.497 [2024-11-19 17:45:05.481603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.497 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.491556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.491616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.491630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.491638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.497 [2024-11-19 17:45:05.491644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.497 [2024-11-19 17:45:05.491659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.497 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.501568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.501619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.501634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.501641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.497 [2024-11-19 17:45:05.501652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.497 [2024-11-19 17:45:05.501667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.497 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.511617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.511676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.511689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.511697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.497 [2024-11-19 17:45:05.511703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.497 [2024-11-19 17:45:05.511718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.497 qpair failed and we were unable to recover it.
00:27:03.497 [2024-11-19 17:45:05.521640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.497 [2024-11-19 17:45:05.521695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.497 [2024-11-19 17:45:05.521709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.497 [2024-11-19 17:45:05.521716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.521723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.521737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.531666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.531733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.531749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.531757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.531763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.531780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.541730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.541788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.541802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.541809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.541815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.541830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.551732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.551789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.551803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.551811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.551817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.551832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.561741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.561797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.561811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.561818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.561825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.561840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.571706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.571760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.571774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.571781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.571788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.571803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.581847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.581903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.581918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.581925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.581932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.581953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.591824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.591879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.591899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.591907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.591913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.591930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.601860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.601916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.601931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.601938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.601945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.601964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.611899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.611960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.611975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.611983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.611989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.612004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.621929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.621999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.622013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.622020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.622027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.622042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.631949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.632004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.632018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.632026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.632035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.632051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.641944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.642008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.642023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.642030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.498 [2024-11-19 17:45:05.642037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.498 [2024-11-19 17:45:05.642051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.498 qpair failed and we were unable to recover it.
00:27:03.498 [2024-11-19 17:45:05.652014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.498 [2024-11-19 17:45:05.652071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.498 [2024-11-19 17:45:05.652085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.498 [2024-11-19 17:45:05.652092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.499 [2024-11-19 17:45:05.652099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.499 [2024-11-19 17:45:05.652114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.499 qpair failed and we were unable to recover it.
00:27:03.499 [2024-11-19 17:45:05.662035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.499 [2024-11-19 17:45:05.662092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.499 [2024-11-19 17:45:05.662107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.499 [2024-11-19 17:45:05.662114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.499 [2024-11-19 17:45:05.662121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.499 [2024-11-19 17:45:05.662136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.499 qpair failed and we were unable to recover it.
00:27:03.499 [2024-11-19 17:45:05.672040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.499 [2024-11-19 17:45:05.672092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.499 [2024-11-19 17:45:05.672107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.499 [2024-11-19 17:45:05.672116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.499 [2024-11-19 17:45:05.672123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.499 [2024-11-19 17:45:05.672138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.499 qpair failed and we were unable to recover it.
00:27:03.499 [2024-11-19 17:45:05.682099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.499 [2024-11-19 17:45:05.682200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.499 [2024-11-19 17:45:05.682214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.499 [2024-11-19 17:45:05.682222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.499 [2024-11-19 17:45:05.682230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.499 [2024-11-19 17:45:05.682245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.499 qpair failed and we were unable to recover it.
00:27:03.499 [2024-11-19 17:45:05.692125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.499 [2024-11-19 17:45:05.692180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.499 [2024-11-19 17:45:05.692194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.499 [2024-11-19 17:45:05.692201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.499 [2024-11-19 17:45:05.692208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.499 [2024-11-19 17:45:05.692223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.499 qpair failed and we were unable to recover it.
00:27:03.499 [2024-11-19 17:45:05.702102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.499 [2024-11-19 17:45:05.702159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.499 [2024-11-19 17:45:05.702173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.499 [2024-11-19 17:45:05.702180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.499 [2024-11-19 17:45:05.702187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.499 [2024-11-19 17:45:05.702201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.499 qpair failed and we were unable to recover it.
00:27:03.499 [2024-11-19 17:45:05.712185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.499 [2024-11-19 17:45:05.712243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.499 [2024-11-19 17:45:05.712260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.499 [2024-11-19 17:45:05.712268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.499 [2024-11-19 17:45:05.712275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.499 [2024-11-19 17:45:05.712290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.499 qpair failed and we were unable to recover it.
00:27:03.760 [2024-11-19 17:45:05.722247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.760 [2024-11-19 17:45:05.722306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.760 [2024-11-19 17:45:05.722324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.760 [2024-11-19 17:45:05.722332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.760 [2024-11-19 17:45:05.722338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.760 [2024-11-19 17:45:05.722353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.760 qpair failed and we were unable to recover it.
00:27:03.760 [2024-11-19 17:45:05.732238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.760 [2024-11-19 17:45:05.732297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.760 [2024-11-19 17:45:05.732311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.760 [2024-11-19 17:45:05.732319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.760 [2024-11-19 17:45:05.732326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.760 [2024-11-19 17:45:05.732341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.760 qpair failed and we were unable to recover it.
00:27:03.760 [2024-11-19 17:45:05.742276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.760 [2024-11-19 17:45:05.742334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.760 [2024-11-19 17:45:05.742348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.760 [2024-11-19 17:45:05.742356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.761 [2024-11-19 17:45:05.742363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.761 [2024-11-19 17:45:05.742378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.761 qpair failed and we were unable to recover it.
00:27:03.761 [2024-11-19 17:45:05.752292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.761 [2024-11-19 17:45:05.752349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.761 [2024-11-19 17:45:05.752363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.761 [2024-11-19 17:45:05.752371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.761 [2024-11-19 17:45:05.752377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.761 [2024-11-19 17:45:05.752392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.761 qpair failed and we were unable to recover it.
00:27:03.761 [2024-11-19 17:45:05.762316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.761 [2024-11-19 17:45:05.762373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.761 [2024-11-19 17:45:05.762388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.761 [2024-11-19 17:45:05.762395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.761 [2024-11-19 17:45:05.762405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.761 [2024-11-19 17:45:05.762420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.761 qpair failed and we were unable to recover it.
00:27:03.761 [2024-11-19 17:45:05.772301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.761 [2024-11-19 17:45:05.772358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.761 [2024-11-19 17:45:05.772373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.761 [2024-11-19 17:45:05.772380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.761 [2024-11-19 17:45:05.772386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.761 [2024-11-19 17:45:05.772401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.761 qpair failed and we were unable to recover it.
00:27:03.761 [2024-11-19 17:45:05.782319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.761 [2024-11-19 17:45:05.782378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.761 [2024-11-19 17:45:05.782393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.761 [2024-11-19 17:45:05.782400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.761 [2024-11-19 17:45:05.782407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:03.761 [2024-11-19 17:45:05.782422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.761 qpair failed and we were unable to recover it.
00:27:03.761 [2024-11-19 17:45:05.792403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.792454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.792468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.792476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.792483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.792498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.802430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.802487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.802503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.802512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.802519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.802536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.812459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.812519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.812533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.812541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.812547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.812563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.822421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.822501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.822518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.822525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.822531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.822548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.832521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.832601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.832615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.832622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.832630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.832645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.842523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.842609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.842623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.842631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.842637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.842651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.852569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.852631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.852652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.852659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.852665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.852680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.862527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.862585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.862599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.761 [2024-11-19 17:45:05.862606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.761 [2024-11-19 17:45:05.862613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.761 [2024-11-19 17:45:05.862628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.761 qpair failed and we were unable to recover it. 
00:27:03.761 [2024-11-19 17:45:05.872559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.761 [2024-11-19 17:45:05.872622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.761 [2024-11-19 17:45:05.872636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.872643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.872649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.872664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.882687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.882745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.882759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.882766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.882772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.882787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.892719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.892779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.892793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.892800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.892810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.892826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.902643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.902699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.902716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.902724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.902731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.902747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.912672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.912724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.912739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.912747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.912753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.912768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.922761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.922830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.922846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.922853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.922859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.922875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.932827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.932886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.932902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.932910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.932917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.932932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.942846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.942902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.942917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.942925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.942932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.942953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.952922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.953028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.953043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.953051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.953057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.953072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.962934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.963014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.963031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.963038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.963044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.963060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:03.762 [2024-11-19 17:45:05.972993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.762 [2024-11-19 17:45:05.973081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.762 [2024-11-19 17:45:05.973097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.762 [2024-11-19 17:45:05.973104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.762 [2024-11-19 17:45:05.973111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:03.762 [2024-11-19 17:45:05.973127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:03.762 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:05.982963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:05.983039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:05.983058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:05.983065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:05.983071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:05.983087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:05.993004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:05.993057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:05.993073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:05.993081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:05.993087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:05.993103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:06.003043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:06.003098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:06.003113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:06.003121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:06.003127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:06.003143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:06.013038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:06.013099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:06.013115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:06.013122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:06.013129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:06.013144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:06.023079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:06.023135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:06.023150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:06.023157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:06.023167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:06.023183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:06.033185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:06.033242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:06.033256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:06.033264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:06.033270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:06.033286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:06.043092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:06.043146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:06.043161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:06.043168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:06.043175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:06.043190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:06.053183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.024 [2024-11-19 17:45:06.053242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.024 [2024-11-19 17:45:06.053256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.024 [2024-11-19 17:45:06.053263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.024 [2024-11-19 17:45:06.053270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.024 [2024-11-19 17:45:06.053286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.024 qpair failed and we were unable to recover it. 
00:27:04.024 [2024-11-19 17:45:06.063182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.024 [2024-11-19 17:45:06.063253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.024 [2024-11-19 17:45:06.063267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.024 [2024-11-19 17:45:06.063276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.024 [2024-11-19 17:45:06.063281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.024 [2024-11-19 17:45:06.063296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.024 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.073140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.073195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.073209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.073216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.073222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.073237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.083238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.083295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.083310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.083318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.083324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.083339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.093194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.093248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.093264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.093272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.093279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.093294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.103308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.103370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.103384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.103392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.103398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.103413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.113300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.113352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.113370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.113377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.113383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.113399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.123334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.123385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.123400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.123407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.123415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.123430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.133354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.133409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.133423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.133430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.133437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.133452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.143343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.143399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.143413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.143420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.143426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.143441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.153373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.153425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.153439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.153447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.153457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.153472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.163429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.163486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.163501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.163508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.163514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.163529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.173501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.173601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.173615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.173622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.173629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.173643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.183499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.183556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.183570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.183577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.183584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.183598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.025 qpair failed and we were unable to recover it.
00:27:04.025 [2024-11-19 17:45:06.193480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.025 [2024-11-19 17:45:06.193554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.025 [2024-11-19 17:45:06.193570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.025 [2024-11-19 17:45:06.193577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.025 [2024-11-19 17:45:06.193584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.025 [2024-11-19 17:45:06.193599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.026 qpair failed and we were unable to recover it.
00:27:04.026 [2024-11-19 17:45:06.203490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.026 [2024-11-19 17:45:06.203567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.026 [2024-11-19 17:45:06.203581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.026 [2024-11-19 17:45:06.203589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.026 [2024-11-19 17:45:06.203595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.026 [2024-11-19 17:45:06.203610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.026 qpair failed and we were unable to recover it.
00:27:04.026 [2024-11-19 17:45:06.213532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.026 [2024-11-19 17:45:06.213587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.026 [2024-11-19 17:45:06.213602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.026 [2024-11-19 17:45:06.213609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.026 [2024-11-19 17:45:06.213617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.026 [2024-11-19 17:45:06.213631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.026 qpair failed and we were unable to recover it.
00:27:04.026 [2024-11-19 17:45:06.223670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.026 [2024-11-19 17:45:06.223731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.026 [2024-11-19 17:45:06.223747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.026 [2024-11-19 17:45:06.223754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.026 [2024-11-19 17:45:06.223761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.026 [2024-11-19 17:45:06.223776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.026 qpair failed and we were unable to recover it.
00:27:04.026 [2024-11-19 17:45:06.233661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.026 [2024-11-19 17:45:06.233722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.026 [2024-11-19 17:45:06.233737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.026 [2024-11-19 17:45:06.233744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.026 [2024-11-19 17:45:06.233751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.026 [2024-11-19 17:45:06.233766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.026 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.243687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.243768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.243788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.243795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.243801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.243817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.253724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.253778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.253793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.253800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.253807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.253822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.263678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.263732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.263746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.263754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.263760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.263776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.273782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.273835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.273849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.273857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.273864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.273879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.283854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.283911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.283926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.283934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.283944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.283965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.293859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.293931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.293946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.293961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.293968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.293983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.303866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.303920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.303934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.303941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.303952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.303968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.313889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.313939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.313957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.313964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.313971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.313986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.323953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.324045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.324059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.324066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.324072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.324088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.333963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.334021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.334035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.334042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.334049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.334064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.344043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.344101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.344115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.287 [2024-11-19 17:45:06.344123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.287 [2024-11-19 17:45:06.344130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.287 [2024-11-19 17:45:06.344145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.287 qpair failed and we were unable to recover it.
00:27:04.287 [2024-11-19 17:45:06.354048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.287 [2024-11-19 17:45:06.354106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.287 [2024-11-19 17:45:06.354121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.288 [2024-11-19 17:45:06.354128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.288 [2024-11-19 17:45:06.354135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.288 [2024-11-19 17:45:06.354150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.288 qpair failed and we were unable to recover it.
00:27:04.288 [2024-11-19 17:45:06.364032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.288 [2024-11-19 17:45:06.364084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.288 [2024-11-19 17:45:06.364098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.288 [2024-11-19 17:45:06.364105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.288 [2024-11-19 17:45:06.364112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.288 [2024-11-19 17:45:06.364127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.288 qpair failed and we were unable to recover it.
00:27:04.288 [2024-11-19 17:45:06.374007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.288 [2024-11-19 17:45:06.374092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.288 [2024-11-19 17:45:06.374110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.288 [2024-11-19 17:45:06.374117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.288 [2024-11-19 17:45:06.374123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.288 [2024-11-19 17:45:06.374138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.288 qpair failed and we were unable to recover it.
00:27:04.288 [2024-11-19 17:45:06.384089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.288 [2024-11-19 17:45:06.384144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.288 [2024-11-19 17:45:06.384159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.288 [2024-11-19 17:45:06.384167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.288 [2024-11-19 17:45:06.384174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.288 [2024-11-19 17:45:06.384189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.288 qpair failed and we were unable to recover it.
00:27:04.288 [2024-11-19 17:45:06.394101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.288 [2024-11-19 17:45:06.394154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.288 [2024-11-19 17:45:06.394168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.288 [2024-11-19 17:45:06.394175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.288 [2024-11-19 17:45:06.394182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.288 [2024-11-19 17:45:06.394197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.288 qpair failed and we were unable to recover it.
00:27:04.288 [2024-11-19 17:45:06.404161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.288 [2024-11-19 17:45:06.404213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.288 [2024-11-19 17:45:06.404228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.288 [2024-11-19 17:45:06.404235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.288 [2024-11-19 17:45:06.404242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0
00:27:04.288 [2024-11-19 17:45:06.404257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.288 qpair failed and we were unable to recover it.
00:27:04.288 [2024-11-19 17:45:06.414193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.414252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.414266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.414274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.288 [2024-11-19 17:45:06.414285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.288 [2024-11-19 17:45:06.414301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.288 qpair failed and we were unable to recover it. 
00:27:04.288 [2024-11-19 17:45:06.424137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.424196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.424212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.424219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.288 [2024-11-19 17:45:06.424226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.288 [2024-11-19 17:45:06.424241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.288 qpair failed and we were unable to recover it. 
00:27:04.288 [2024-11-19 17:45:06.434167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.434218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.434233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.434240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.288 [2024-11-19 17:45:06.434246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.288 [2024-11-19 17:45:06.434261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.288 qpair failed and we were unable to recover it. 
00:27:04.288 [2024-11-19 17:45:06.444232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.444331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.444346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.444353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.288 [2024-11-19 17:45:06.444359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.288 [2024-11-19 17:45:06.444375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.288 qpair failed and we were unable to recover it. 
00:27:04.288 [2024-11-19 17:45:06.454255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.454313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.454328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.454335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.288 [2024-11-19 17:45:06.454342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.288 [2024-11-19 17:45:06.454357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.288 qpair failed and we were unable to recover it. 
00:27:04.288 [2024-11-19 17:45:06.464259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.464311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.464325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.464332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.288 [2024-11-19 17:45:06.464338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.288 [2024-11-19 17:45:06.464353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.288 qpair failed and we were unable to recover it. 
00:27:04.288 [2024-11-19 17:45:06.474311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.474379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.474393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.474401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.288 [2024-11-19 17:45:06.474408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.288 [2024-11-19 17:45:06.474424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.288 qpair failed and we were unable to recover it. 
00:27:04.288 [2024-11-19 17:45:06.484340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.288 [2024-11-19 17:45:06.484411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.288 [2024-11-19 17:45:06.484426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.288 [2024-11-19 17:45:06.484432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.289 [2024-11-19 17:45:06.484438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.289 [2024-11-19 17:45:06.484453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.289 qpair failed and we were unable to recover it. 
00:27:04.289 [2024-11-19 17:45:06.494341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.289 [2024-11-19 17:45:06.494398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.289 [2024-11-19 17:45:06.494412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.289 [2024-11-19 17:45:06.494419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.289 [2024-11-19 17:45:06.494426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.289 [2024-11-19 17:45:06.494441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.289 qpair failed and we were unable to recover it. 
00:27:04.289 [2024-11-19 17:45:06.504385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.549 [2024-11-19 17:45:06.504473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.549 [2024-11-19 17:45:06.504496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.549 [2024-11-19 17:45:06.504503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.549 [2024-11-19 17:45:06.504510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.549 [2024-11-19 17:45:06.504525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.549 qpair failed and we were unable to recover it. 
00:27:04.549 [2024-11-19 17:45:06.514404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.549 [2024-11-19 17:45:06.514458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.549 [2024-11-19 17:45:06.514472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.549 [2024-11-19 17:45:06.514480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.549 [2024-11-19 17:45:06.514487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.549 [2024-11-19 17:45:06.514501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.549 qpair failed and we were unable to recover it. 
00:27:04.549 [2024-11-19 17:45:06.524457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.549 [2024-11-19 17:45:06.524511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.549 [2024-11-19 17:45:06.524525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.549 [2024-11-19 17:45:06.524532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.549 [2024-11-19 17:45:06.524539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.549 [2024-11-19 17:45:06.524554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.549 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.534513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.534615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.534630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.534637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.534643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.534658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.544571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.544626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.544640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.544647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.544657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.544672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.554507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.554615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.554629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.554636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.554643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.554658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.564612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.564717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.564730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.564738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.564744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.564759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.574702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.574797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.574811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.574819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.574825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.574840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.584676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.584733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.584748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.584755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.584762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.584776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.594644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.594699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.594713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.594720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.594727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.594742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.604781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.604883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.604897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.604904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.604911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.604926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.614766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.614832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.614846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.614855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.614862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.614876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.624790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.624847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.624862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.624869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.624876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.624890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.634813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.634867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.634885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.634892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.634898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.634913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.644810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.644863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.644876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.644884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.644891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.644906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.654807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.550 [2024-11-19 17:45:06.654871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.550 [2024-11-19 17:45:06.654885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.550 [2024-11-19 17:45:06.654892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.550 [2024-11-19 17:45:06.654899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.550 [2024-11-19 17:45:06.654914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.550 qpair failed and we were unable to recover it. 
00:27:04.550 [2024-11-19 17:45:06.664879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.664957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.664971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.664978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.664985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.665001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.674870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.674967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.674984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.674991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.675002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.675017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.684985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.685077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.685091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.685097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.685104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.685118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.694970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.695026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.695040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.695047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.695054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.695069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.704980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.705035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.705049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.705056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.705064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.705079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.715048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.715113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.715127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.715134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.715141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.715156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.725079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.725136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.725151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.725158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.725165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.725180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.735124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.735182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.735197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.735204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.735211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.735226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.745138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.745195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.745210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.745217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.745224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.745238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.755191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.755256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.755270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.755278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.755284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.755299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.551 [2024-11-19 17:45:06.765135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.551 [2024-11-19 17:45:06.765191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.551 [2024-11-19 17:45:06.765208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.551 [2024-11-19 17:45:06.765216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.551 [2024-11-19 17:45:06.765222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.551 [2024-11-19 17:45:06.765237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.551 qpair failed and we were unable to recover it. 
00:27:04.812 [2024-11-19 17:45:06.775160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.812 [2024-11-19 17:45:06.775222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.812 [2024-11-19 17:45:06.775237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.812 [2024-11-19 17:45:06.775245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.812 [2024-11-19 17:45:06.775251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.812 [2024-11-19 17:45:06.775267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.812 qpair failed and we were unable to recover it. 
00:27:04.812 [2024-11-19 17:45:06.785193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.812 [2024-11-19 17:45:06.785246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.812 [2024-11-19 17:45:06.785262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.812 [2024-11-19 17:45:06.785269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.812 [2024-11-19 17:45:06.785276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.812 [2024-11-19 17:45:06.785291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.812 qpair failed and we were unable to recover it. 
00:27:04.812 [2024-11-19 17:45:06.795270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.812 [2024-11-19 17:45:06.795364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.812 [2024-11-19 17:45:06.795379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.812 [2024-11-19 17:45:06.795387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.812 [2024-11-19 17:45:06.795393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.812 [2024-11-19 17:45:06.795409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.812 qpair failed and we were unable to recover it. 
00:27:04.812 [2024-11-19 17:45:06.805303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.812 [2024-11-19 17:45:06.805357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.812 [2024-11-19 17:45:06.805371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.812 [2024-11-19 17:45:06.805378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.812 [2024-11-19 17:45:06.805388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.812 [2024-11-19 17:45:06.805403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.812 qpair failed and we were unable to recover it. 
00:27:04.812 [2024-11-19 17:45:06.815406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.812 [2024-11-19 17:45:06.815464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.812 [2024-11-19 17:45:06.815478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.812 [2024-11-19 17:45:06.815485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.812 [2024-11-19 17:45:06.815492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.815506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.825368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.825420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.825434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.825442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.825448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.825463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.835406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.835480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.835494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.835501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.835507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.835522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.845425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.845476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.845492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.845500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.845506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.845521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.855375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.855435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.855450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.855457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.855463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.855478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.865402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.865496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.865510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.865518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.865524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.865539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.875500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.875551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.875565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.875572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.875579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.875595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.885535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.885593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.885607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.885615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.885622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.885637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.895568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.895623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.895642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.895650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.895657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.895673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.905617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.905696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.905713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.905720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.905727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.905742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.915626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.915681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.915696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.915704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.915710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.915725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.925651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.925704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.925719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.925726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.925733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.925748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.935621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.935687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.935701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.935708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.935717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.935732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.813 [2024-11-19 17:45:06.945705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.813 [2024-11-19 17:45:06.945767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.813 [2024-11-19 17:45:06.945784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.813 [2024-11-19 17:45:06.945791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.813 [2024-11-19 17:45:06.945797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.813 [2024-11-19 17:45:06.945813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.813 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:06.955754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:06.955810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:06.955824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:06.955831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:06.955838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:06.955853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:06.965726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:06.965813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:06.965829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:06.965837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:06.965845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:06.965860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:06.975811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:06.975865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:06.975881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:06.975888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:06.975895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:06.975910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:06.985748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:06.985807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:06.985821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:06.985829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:06.985836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:06.985850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:06.995860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:06.995909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:06.995924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:06.995931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:06.995938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:06.995960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:07.005921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:07.005975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:07.005989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:07.005997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:07.006004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:07.006019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:07.015927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:07.016013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:07.016028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:07.016035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:07.016041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:07.016055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:04.814 [2024-11-19 17:45:07.025960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.814 [2024-11-19 17:45:07.026016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.814 [2024-11-19 17:45:07.026034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.814 [2024-11-19 17:45:07.026042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.814 [2024-11-19 17:45:07.026048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:04.814 [2024-11-19 17:45:07.026063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.814 qpair failed and we were unable to recover it. 
00:27:05.075 [2024-11-19 17:45:07.036002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.075 [2024-11-19 17:45:07.036083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.075 [2024-11-19 17:45:07.036098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.075 [2024-11-19 17:45:07.036105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.075 [2024-11-19 17:45:07.036111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.075 [2024-11-19 17:45:07.036126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.075 qpair failed and we were unable to recover it. 
00:27:05.075 [2024-11-19 17:45:07.046006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.075 [2024-11-19 17:45:07.046063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.075 [2024-11-19 17:45:07.046079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.075 [2024-11-19 17:45:07.046088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.075 [2024-11-19 17:45:07.046095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.075 [2024-11-19 17:45:07.046110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.075 qpair failed and we were unable to recover it. 
00:27:05.075 [2024-11-19 17:45:07.056082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.075 [2024-11-19 17:45:07.056190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.075 [2024-11-19 17:45:07.056206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.075 [2024-11-19 17:45:07.056213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.075 [2024-11-19 17:45:07.056219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.075 [2024-11-19 17:45:07.056235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.075 qpair failed and we were unable to recover it. 
00:27:05.075 [2024-11-19 17:45:07.066072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.075 [2024-11-19 17:45:07.066129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.075 [2024-11-19 17:45:07.066143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.075 [2024-11-19 17:45:07.066150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.075 [2024-11-19 17:45:07.066160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.075 [2024-11-19 17:45:07.066176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.075 qpair failed and we were unable to recover it. 
00:27:05.075 [2024-11-19 17:45:07.076102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.075 [2024-11-19 17:45:07.076161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.075 [2024-11-19 17:45:07.076175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.075 [2024-11-19 17:45:07.076182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.075 [2024-11-19 17:45:07.076189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.076203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.086042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.086130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.086144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.086151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.086157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.086172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.096165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.096242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.096257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.096263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.096270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.096284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.106165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.106253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.106268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.106275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.106281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.106297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.116166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.116223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.116238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.116245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.116252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.116266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.126231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.126289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.126303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.126310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.126318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.126333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.136202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.136256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.136270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.136277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.136284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.136299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.146225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.146279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.146294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.146301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.146308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.146324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.156334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.156398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.156419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.156427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.156433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.156448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.166255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.166313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.166327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.166334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.166341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.166357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.176386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.176453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.176468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.176476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.176483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.176497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.186405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.186461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.076 [2024-11-19 17:45:07.186476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.076 [2024-11-19 17:45:07.186483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.076 [2024-11-19 17:45:07.186489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.076 [2024-11-19 17:45:07.186505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.076 qpair failed and we were unable to recover it. 
00:27:05.076 [2024-11-19 17:45:07.196431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.076 [2024-11-19 17:45:07.196484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.196498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.196505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.196515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.196530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.206436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.206488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.206503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.206510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.206516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.206531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.216526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.216589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.216604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.216611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.216617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.216632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.226519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.226578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.226594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.226601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.226607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.226623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.236501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.236583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.236598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.236605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.236611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.236626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.246539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.246598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.246612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.246619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.246626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.246641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.256543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.256601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.256617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.256624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.256630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.256645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.266657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.266756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.266770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.266777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.266783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.266799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.276706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.276779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.276794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.276801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.276808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.276822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.077 [2024-11-19 17:45:07.286712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.077 [2024-11-19 17:45:07.286775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.077 [2024-11-19 17:45:07.286793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.077 [2024-11-19 17:45:07.286800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.077 [2024-11-19 17:45:07.286807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.077 [2024-11-19 17:45:07.286822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.077 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-19 17:45:07.296721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.338 [2024-11-19 17:45:07.296778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.338 [2024-11-19 17:45:07.296792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.338 [2024-11-19 17:45:07.296799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.338 [2024-11-19 17:45:07.296806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.338 [2024-11-19 17:45:07.296820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-19 17:45:07.306742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.338 [2024-11-19 17:45:07.306795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.338 [2024-11-19 17:45:07.306809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.338 [2024-11-19 17:45:07.306817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.338 [2024-11-19 17:45:07.306823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.338 [2024-11-19 17:45:07.306837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.657736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.657791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.657806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.657812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.657819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.657834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.667742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.667799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.667814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.667820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.667827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.667842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.677737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.677798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.677813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.677821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.677828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.677843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.687839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.687892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.687906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.687913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.687920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.687935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.697805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.697861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.697875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.697882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.697888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.697903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.707902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.707969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.707984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.707992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.707998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.708012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.717892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.717957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.717974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.717986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.717994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.718009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.727915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.727978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.727993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.728000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.728006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.728022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.737957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.738012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.738027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.738034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.738041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.738056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.747966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.748019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.748034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.748041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.748048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.602 [2024-11-19 17:45:07.748062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-11-19 17:45:07.758004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.602 [2024-11-19 17:45:07.758121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.602 [2024-11-19 17:45:07.758137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.602 [2024-11-19 17:45:07.758144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.602 [2024-11-19 17:45:07.758150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.603 [2024-11-19 17:45:07.758168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.603 qpair failed and we were unable to recover it. 
00:27:05.603 [2024-11-19 17:45:07.768031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.603 [2024-11-19 17:45:07.768085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.603 [2024-11-19 17:45:07.768100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.603 [2024-11-19 17:45:07.768107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.603 [2024-11-19 17:45:07.768115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.603 [2024-11-19 17:45:07.768130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.603 qpair failed and we were unable to recover it. 
00:27:05.603 [2024-11-19 17:45:07.777995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.603 [2024-11-19 17:45:07.778065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.603 [2024-11-19 17:45:07.778080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.603 [2024-11-19 17:45:07.778087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.603 [2024-11-19 17:45:07.778094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.603 [2024-11-19 17:45:07.778109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.603 qpair failed and we were unable to recover it. 
00:27:05.603 [2024-11-19 17:45:07.788083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.603 [2024-11-19 17:45:07.788150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.603 [2024-11-19 17:45:07.788165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.603 [2024-11-19 17:45:07.788172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.603 [2024-11-19 17:45:07.788179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.603 [2024-11-19 17:45:07.788193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.603 qpair failed and we were unable to recover it. 
00:27:05.603 [2024-11-19 17:45:07.798115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.603 [2024-11-19 17:45:07.798169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.603 [2024-11-19 17:45:07.798182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.603 [2024-11-19 17:45:07.798189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.603 [2024-11-19 17:45:07.798196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.603 [2024-11-19 17:45:07.798211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.603 qpair failed and we were unable to recover it. 
00:27:05.603 [2024-11-19 17:45:07.808156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.603 [2024-11-19 17:45:07.808226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.603 [2024-11-19 17:45:07.808240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.603 [2024-11-19 17:45:07.808247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.603 [2024-11-19 17:45:07.808253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.603 [2024-11-19 17:45:07.808268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.603 qpair failed and we were unable to recover it. 
00:27:05.603 [2024-11-19 17:45:07.818158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.603 [2024-11-19 17:45:07.818244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.603 [2024-11-19 17:45:07.818259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.603 [2024-11-19 17:45:07.818266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.603 [2024-11-19 17:45:07.818272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.603 [2024-11-19 17:45:07.818287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.603 qpair failed and we were unable to recover it. 
00:27:05.864 [2024-11-19 17:45:07.828194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.864 [2024-11-19 17:45:07.828251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.864 [2024-11-19 17:45:07.828267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.864 [2024-11-19 17:45:07.828274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.864 [2024-11-19 17:45:07.828282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.864 [2024-11-19 17:45:07.828297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.864 qpair failed and we were unable to recover it. 
00:27:05.864 [2024-11-19 17:45:07.838237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.864 [2024-11-19 17:45:07.838292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.864 [2024-11-19 17:45:07.838307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.864 [2024-11-19 17:45:07.838315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.864 [2024-11-19 17:45:07.838321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.864 [2024-11-19 17:45:07.838336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.864 qpair failed and we were unable to recover it. 
00:27:05.864 [2024-11-19 17:45:07.848278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.864 [2024-11-19 17:45:07.848329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.864 [2024-11-19 17:45:07.848343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.864 [2024-11-19 17:45:07.848354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.864 [2024-11-19 17:45:07.848360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.848375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.858282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.858341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.858356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.858363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.858371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.858386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.868256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.868314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.868329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.868337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.868343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.868357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.878360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.878415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.878431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.878439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.878446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.878461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.888395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.888447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.888461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.888469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.888475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.888490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.898436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.898503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.898520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.898527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.898534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.898550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.908453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.908509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.908526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.908534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.908541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.908557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.918469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.918522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.918536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.918544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.918550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.918565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.928535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.928597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.928611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.928618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.928625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.928640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.938544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.938604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.938618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.938626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.938632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.938647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.948574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.948630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.948643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.948651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.948657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.948671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.958602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.958662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.958677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.958684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.958691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.958706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.968614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.968669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.968683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.968691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.968698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.968713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.865 qpair failed and we were unable to recover it. 
00:27:05.865 [2024-11-19 17:45:07.978680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.865 [2024-11-19 17:45:07.978740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.865 [2024-11-19 17:45:07.978753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.865 [2024-11-19 17:45:07.978764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.865 [2024-11-19 17:45:07.978770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.865 [2024-11-19 17:45:07.978785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:07.988732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:07.988791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:07.988806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:07.988813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:07.988820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:07.988835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:07.998704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:07.998761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:07.998775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:07.998782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:07.998790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:07.998806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.008726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.008776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.008791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.008798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.008805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.008820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.018765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.018821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.018836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.018843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.018850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.018868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.028741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.028797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.028812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.028819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.028826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.028841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.038811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.038864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.038878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.038885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.038892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.038906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.048862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.048925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.048939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.048951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.048958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.048973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.058881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.058952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.058968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.058976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.058984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.058999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.068889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.068952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.068966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.068974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.068981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.068995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:05.866 [2024-11-19 17:45:08.078919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.866 [2024-11-19 17:45:08.078980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.866 [2024-11-19 17:45:08.078994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.866 [2024-11-19 17:45:08.079002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.866 [2024-11-19 17:45:08.079008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:05.866 [2024-11-19 17:45:08.079023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.866 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.088937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.088992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.089008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.089015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.089022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.089039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.098985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.099044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.099059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.099066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.099073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.099087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.109002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.109055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.109069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.109080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.109087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.109102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.119058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.119118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.119133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.119141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.119147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.119162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.129067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.129127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.129141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.129149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.129155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.129170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.139107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.139187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.139202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.139209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.139215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.139230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.149183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.149284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.149299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.149306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.149313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.149332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.159144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.159197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.159211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.159217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.159224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.159239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.169215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.169271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.169286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.169294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.169300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.169315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.179252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.179353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.179367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.127 [2024-11-19 17:45:08.179373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.127 [2024-11-19 17:45:08.179380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.127 [2024-11-19 17:45:08.179395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.127 qpair failed and we were unable to recover it. 
00:27:06.127 [2024-11-19 17:45:08.189268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.127 [2024-11-19 17:45:08.189370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.127 [2024-11-19 17:45:08.189383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.189391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.189398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.128 [2024-11-19 17:45:08.189414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.128 qpair failed and we were unable to recover it. 
00:27:06.128 [2024-11-19 17:45:08.199245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.128 [2024-11-19 17:45:08.199302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.128 [2024-11-19 17:45:08.199317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.199324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.199331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.128 [2024-11-19 17:45:08.199346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.128 qpair failed and we were unable to recover it. 
00:27:06.128 [2024-11-19 17:45:08.209306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.128 [2024-11-19 17:45:08.209368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.128 [2024-11-19 17:45:08.209382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.209390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.209396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.128 [2024-11-19 17:45:08.209411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.128 qpair failed and we were unable to recover it. 
00:27:06.128 [2024-11-19 17:45:08.219319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.128 [2024-11-19 17:45:08.219376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.128 [2024-11-19 17:45:08.219389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.219396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.219403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a9ba0 00:27:06.128 [2024-11-19 17:45:08.219417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.128 qpair failed and we were unable to recover it. 
00:27:06.128 [2024-11-19 17:45:08.229369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.128 [2024-11-19 17:45:08.229473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.128 [2024-11-19 17:45:08.229531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.229557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.229580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4104000b90 00:27:06.128 [2024-11-19 17:45:08.229635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.128 qpair failed and we were unable to recover it. 
00:27:06.128 [2024-11-19 17:45:08.239348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.128 [2024-11-19 17:45:08.239419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.128 [2024-11-19 17:45:08.239446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.239466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.239479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4104000b90 00:27:06.128 [2024-11-19 17:45:08.239508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.128 qpair failed and we were unable to recover it. 
00:27:06.128 [2024-11-19 17:45:08.249450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.128 [2024-11-19 17:45:08.249547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.128 [2024-11-19 17:45:08.249604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.249630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.249654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4100000b90 00:27:06.128 [2024-11-19 17:45:08.249705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:06.128 qpair failed and we were unable to recover it. 
00:27:06.128 [2024-11-19 17:45:08.259423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.128 [2024-11-19 17:45:08.259507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.128 [2024-11-19 17:45:08.259534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.128 [2024-11-19 17:45:08.259548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.128 [2024-11-19 17:45:08.259562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4100000b90 00:27:06.128 [2024-11-19 17:45:08.259594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:06.128 qpair failed and we were unable to recover it. 00:27:06.128 [2024-11-19 17:45:08.259695] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:06.128 A controller has encountered a failure and is being reset. 00:27:06.128 Controller properly reset. 00:27:06.128 Initializing NVMe Controllers 00:27:06.128 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:06.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:06.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:06.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:06.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:06.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:06.128 Initialization complete. Launching workers. 
00:27:06.128 Starting thread on core 1 00:27:06.128 Starting thread on core 2 00:27:06.128 Starting thread on core 3 00:27:06.128 Starting thread on core 0 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:06.128 00:27:06.128 real 0m10.823s 00:27:06.128 user 0m19.212s 00:27:06.128 sys 0m4.616s 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.128 ************************************ 00:27:06.128 END TEST nvmf_target_disconnect_tc2 00:27:06.128 ************************************ 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:06.128 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:06.128 rmmod nvme_tcp 00:27:06.387 rmmod nvme_fabrics 00:27:06.387 rmmod nvme_keyring 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3615938 ']' 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3615938 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3615938 ']' 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3615938 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3615938 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3615938' 00:27:06.387 killing process with pid 3615938 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3615938 00:27:06.387 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3615938 00:27:06.646 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:06.646 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:06.646 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:06.646 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:06.646 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:06.646 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:06.647 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:06.647 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:06.647 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:06.647 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.647 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.647 17:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.552 17:45:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.552 00:27:08.552 real 0m19.604s 00:27:08.552 user 0m47.086s 00:27:08.553 sys 0m9.543s 00:27:08.553 17:45:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.553 17:45:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:08.553 ************************************ 00:27:08.553 END TEST nvmf_target_disconnect 00:27:08.553 ************************************ 00:27:08.553 17:45:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:08.553 00:27:08.553 real 5m52.065s 00:27:08.553 user 10m32.992s 00:27:08.553 sys 1m58.420s 00:27:08.553 17:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.553 17:45:10 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.553 ************************************ 00:27:08.553 END TEST nvmf_host 00:27:08.553 ************************************ 00:27:08.813 17:45:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:08.813 17:45:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:08.813 17:45:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:08.813 17:45:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:08.813 17:45:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.813 17:45:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.813 ************************************ 00:27:08.813 START TEST nvmf_target_core_interrupt_mode 00:27:08.813 ************************************ 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:08.813 * Looking for test storage... 
00:27:08.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:08.813 17:45:10 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.813 --rc 
genhtml_branch_coverage=1 00:27:08.813 --rc genhtml_function_coverage=1 00:27:08.813 --rc genhtml_legend=1 00:27:08.813 --rc geninfo_all_blocks=1 00:27:08.813 --rc geninfo_unexecuted_blocks=1 00:27:08.813 00:27:08.813 ' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.813 --rc genhtml_branch_coverage=1 00:27:08.813 --rc genhtml_function_coverage=1 00:27:08.813 --rc genhtml_legend=1 00:27:08.813 --rc geninfo_all_blocks=1 00:27:08.813 --rc geninfo_unexecuted_blocks=1 00:27:08.813 00:27:08.813 ' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.813 --rc genhtml_branch_coverage=1 00:27:08.813 --rc genhtml_function_coverage=1 00:27:08.813 --rc genhtml_legend=1 00:27:08.813 --rc geninfo_all_blocks=1 00:27:08.813 --rc geninfo_unexecuted_blocks=1 00:27:08.813 00:27:08.813 ' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.813 --rc genhtml_branch_coverage=1 00:27:08.813 --rc genhtml_function_coverage=1 00:27:08.813 --rc genhtml_legend=1 00:27:08.813 --rc geninfo_all_blocks=1 00:27:08.813 --rc geninfo_unexecuted_blocks=1 00:27:08.813 00:27:08.813 ' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.813 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.814 17:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.814 
17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.814 17:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:08.814 
17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.814 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:09.074 ************************************ 00:27:09.074 START TEST nvmf_abort 00:27:09.074 ************************************ 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:09.074 * Looking for test storage... 
00:27:09.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:09.074 17:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.074 --rc genhtml_branch_coverage=1 00:27:09.074 --rc genhtml_function_coverage=1 00:27:09.074 --rc genhtml_legend=1 00:27:09.074 --rc geninfo_all_blocks=1 00:27:09.074 --rc geninfo_unexecuted_blocks=1 00:27:09.074 00:27:09.074 ' 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.074 --rc genhtml_branch_coverage=1 00:27:09.074 --rc genhtml_function_coverage=1 00:27:09.074 --rc genhtml_legend=1 00:27:09.074 --rc geninfo_all_blocks=1 00:27:09.074 --rc geninfo_unexecuted_blocks=1 00:27:09.074 00:27:09.074 ' 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.074 --rc genhtml_branch_coverage=1 00:27:09.074 --rc genhtml_function_coverage=1 00:27:09.074 --rc genhtml_legend=1 00:27:09.074 --rc geninfo_all_blocks=1 00:27:09.074 --rc geninfo_unexecuted_blocks=1 00:27:09.074 00:27:09.074 ' 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.074 --rc genhtml_branch_coverage=1 00:27:09.074 --rc genhtml_function_coverage=1 00:27:09.074 --rc genhtml_legend=1 00:27:09.074 --rc geninfo_all_blocks=1 00:27:09.074 --rc geninfo_unexecuted_blocks=1 00:27:09.074 00:27:09.074 ' 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:09.074 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.075 17:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.075 17:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:09.075 17:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:15.647 17:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:15.647 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:15.647 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.647 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.647 
17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:15.648 Found net devices under 0000:86:00.0: cvl_0_0 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:15.648 Found net devices under 0000:86:00.1: cvl_0_1 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.648 17:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.648 17:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:15.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:27:15.648 00:27:15.648 --- 10.0.0.2 ping statistics --- 00:27:15.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.648 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:15.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:27:15.648 00:27:15.648 --- 10.0.0.1 ping statistics --- 00:27:15.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.648 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3620872 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3620872 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3620872 ']' 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.648 [2024-11-19 17:45:17.255545] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:15.648 [2024-11-19 17:45:17.256460] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:27:15.648 [2024-11-19 17:45:17.256495] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.648 [2024-11-19 17:45:17.334776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:15.648 [2024-11-19 17:45:17.376859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.648 [2024-11-19 17:45:17.376893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.648 [2024-11-19 17:45:17.376900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.648 [2024-11-19 17:45:17.376906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.648 [2024-11-19 17:45:17.376912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.648 [2024-11-19 17:45:17.378338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.648 [2024-11-19 17:45:17.378448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.648 [2024-11-19 17:45:17.378449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.648 [2024-11-19 17:45:17.445192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:15.648 [2024-11-19 17:45:17.446038] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:15.648 [2024-11-19 17:45:17.446505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:15.648 [2024-11-19 17:45:17.446561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:15.648 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 [2024-11-19 17:45:17.511291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:15.649 Malloc0 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 Delay0 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 [2024-11-19 17:45:17.607165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.649 17:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:15.649 [2024-11-19 17:45:17.695257] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:18.183 Initializing NVMe Controllers 00:27:18.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:18.183 controller IO queue size 128 less than required 00:27:18.183 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:18.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:18.183 Initialization complete. Launching workers. 
00:27:18.183 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36954 00:27:18.183 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37011, failed to submit 66 00:27:18.183 success 36954, unsuccessful 57, failed 0 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.183 rmmod nvme_tcp 00:27:18.183 rmmod nvme_fabrics 00:27:18.183 rmmod nvme_keyring 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.183 17:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3620872 ']' 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3620872 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3620872 ']' 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3620872 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3620872 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3620872' 00:27:18.183 killing process with pid 3620872 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3620872 00:27:18.183 17:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3620872 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.183 17:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.183 17:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.089 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.089 00:27:20.089 real 0m11.131s 00:27:20.089 user 0m10.335s 00:27:20.089 sys 0m5.695s 00:27:20.089 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.089 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.089 ************************************ 00:27:20.089 END TEST nvmf_abort 00:27:20.089 ************************************ 00:27:20.089 17:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:20.089 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:20.089 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.089 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:20.089 ************************************ 00:27:20.089 START TEST nvmf_ns_hotplug_stress 00:27:20.089 ************************************ 00:27:20.089 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:20.349 * Looking for test storage... 
00:27:20.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.349 17:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.349 17:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:20.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.349 --rc genhtml_branch_coverage=1 00:27:20.349 --rc genhtml_function_coverage=1 00:27:20.349 --rc genhtml_legend=1 00:27:20.349 --rc geninfo_all_blocks=1 00:27:20.349 --rc geninfo_unexecuted_blocks=1 00:27:20.349 00:27:20.349 ' 00:27:20.349 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:20.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.349 --rc genhtml_branch_coverage=1 00:27:20.349 --rc genhtml_function_coverage=1 00:27:20.349 --rc genhtml_legend=1 00:27:20.349 --rc geninfo_all_blocks=1 00:27:20.349 --rc geninfo_unexecuted_blocks=1 00:27:20.349 00:27:20.349 ' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:20.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.350 --rc genhtml_branch_coverage=1 00:27:20.350 --rc genhtml_function_coverage=1 00:27:20.350 --rc genhtml_legend=1 00:27:20.350 --rc geninfo_all_blocks=1 00:27:20.350 --rc geninfo_unexecuted_blocks=1 00:27:20.350 00:27:20.350 ' 00:27:20.350 17:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:20.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.350 --rc genhtml_branch_coverage=1 00:27:20.350 --rc genhtml_function_coverage=1 00:27:20.350 --rc genhtml_legend=1 00:27:20.350 --rc geninfo_all_blocks=1 00:27:20.350 --rc geninfo_unexecuted_blocks=1 00:27:20.350 00:27:20.350 ' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.350 17:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.350 
17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.350 17:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.924 
17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:26.924 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.925 17:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:26.925 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.925 17:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:26.925 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.925 
17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:27:26.925 Found net devices under 0000:86:00.0: cvl_0_0
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:27:26.925 Found net devices under 0000:86:00.1: cvl_0_1
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:26.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:26.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms
00:27:26.925
00:27:26.925 --- 10.0.0.2 ping statistics ---
00:27:26.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:26.925 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:26.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:26.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms
00:27:26.925
00:27:26.925 --- 10.0.0.1 ping statistics ---
00:27:26.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:26.925 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:27:26.925 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3624867
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3624867
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3624867 ']'
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:26.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:26.926 17:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:27:26.926 [2024-11-19 17:45:28.393343] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:27:26.926 [2024-11-19 17:45:28.394259] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization...
00:27:26.926 [2024-11-19 17:45:28.394292] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:26.926 [2024-11-19 17:45:28.475715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:26.926 [2024-11-19 17:45:28.517574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:26.926 [2024-11-19 17:45:28.517609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:26.926 [2024-11-19 17:45:28.517616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:26.926 [2024-11-19 17:45:28.517622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:26.926 [2024-11-19 17:45:28.517628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:26.926 [2024-11-19 17:45:28.519076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:26.926 [2024-11-19 17:45:28.519185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:26.926 [2024-11-19 17:45:28.519186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:26.926 [2024-11-19 17:45:28.587182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:27:26.926 [2024-11-19 17:45:28.587976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:27:26.926 [2024-11-19 17:45:28.588367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:27:26.926 [2024-11-19 17:45:28.588469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:27:27.185 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:27.444 [2024-11-19 17:45:29.459923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:27.444 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:27:27.703 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:27.703 [2024-11-19 17:45:29.840398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:27.703 17:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:27.962 17:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:27:28.221 Malloc0
00:27:28.221 17:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:27:28.480 Delay0
00:27:28.480 17:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:28.480 17:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:27:28.740 NULL1
00:27:28.740 17:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:27:29.000 17:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3625353
00:27:29.000 17:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:29.000 17:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:27:29.000 17:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:30.379 Read completed with error (sct=0, sc=11)
00:27:30.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:30.379 17:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:30.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:30.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:30.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:30.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:30.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:30.379 17:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:27:30.379 17:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:27:30.639 true
00:27:30.639 17:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:30.639 17:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:31.576 17:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:31.576 17:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:27:31.576 17:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:27:31.836 true
00:27:31.836 17:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:31.836 17:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:32.096 17:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:32.354 17:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:27:32.354 17:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:27:32.354 true
00:27:32.354 17:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:32.354 17:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:33.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:33.732 17:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:33.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:33.732 17:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:27:33.732 17:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:27:33.732 true
00:27:33.991 17:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:33.991 17:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:33.991 17:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:34.249 17:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:27:34.249 17:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:27:34.508 true
00:27:34.508 17:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:34.508 17:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:35.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:35.705 17:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:35.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:35.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:35.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:35.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:35.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:35.705 17:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:27:35.705 17:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:27:35.964 true
00:27:35.965 17:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:35.965 17:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:36.901 17:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:36.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:37.159 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:27:37.160 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:27:37.160 true
00:27:37.160 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:37.160 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:37.418 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:37.677 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:27:37.677 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:27:37.937 true
00:27:37.937 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:37.937 17:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:38.874 17:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:39.133 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:27:39.133 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:27:39.133 true
00:27:39.133 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:39.133 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:39.392 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:39.651 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:27:39.651 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:27:39.909 true
00:27:39.909 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:39.909 17:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:40.845 17:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:41.104 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:27:41.104 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:27:41.362 true
00:27:41.362 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:41.362 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:41.362 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:41.622 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:27:41.622 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:27:41.881 true
00:27:41.881 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:41.881 17:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:42.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:42.818 17:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:42.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:43.078 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:27:43.078 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:27:43.337 true
00:27:43.337 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:43.337 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:43.596 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:43.596 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:27:43.596 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:27:43.855 true
00:27:43.855 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:43.855 17:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:45.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:45.233 17:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:45.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:45.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:45.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:45.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:45.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:45.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:45.233 17:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:27:45.233 17:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:27:45.492 true
00:27:45.492 17:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:45.492 17:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:46.429 17:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:46.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:46.429 17:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:27:46.429 17:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:27:46.688 true
00:27:46.688 17:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:46.688 17:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:46.947 17:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:47.205 17:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:27:47.205 17:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:27:47.205 true
00:27:47.205 17:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:47.205 17:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:48.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:48.583 17:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:48.583 17:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:27:48.583 17:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:27:48.841 true
00:27:48.841 17:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:48.841 17:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:49.101 17:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:49.360 17:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:27:49.360 17:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:27:49.360 true
00:27:49.619 17:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:49.619 17:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:50.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:50.556 17:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:50.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:50.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:50.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:50.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:50.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:50.815 17:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1020 00:27:50.815 17:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:51.074 true 00:27:51.074 17:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:51.074 17:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.902 17:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.902 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:51.902 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:52.161 true 00:27:52.161 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:52.161 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.419 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.678 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:52.678 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:52.678 true 00:27:52.678 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:52.678 17:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.056 17:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.057 17:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:54.057 17:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 
00:27:54.315 true 00:27:54.315 17:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:54.315 17:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.253 17:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.253 17:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:55.253 17:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:55.513 true 00:27:55.513 17:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:55.513 17:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.772 17:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.032 17:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:56.032 17:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1025 00:27:56.291 true 00:27:56.291 17:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:56.291 17:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.228 17:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.486 17:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:57.486 17:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:57.751 true 00:27:57.751 17:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:57.751 17:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:58.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.470 17:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.728 17:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:58.728 17:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:58.987 true 00:27:58.987 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353 00:27:58.987 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.245 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.245 Initializing NVMe Controllers 00:27:59.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.245 Controller IO queue size 128, less than required. 00:27:59.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.245 Controller IO queue size 128, less than required. 00:27:59.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:59.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:59.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:59.245 Initialization complete. Launching workers.
00:27:59.245 ========================================================
00:27:59.245                                                                            Latency(us)
00:27:59.245 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:27:59.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1685.05       0.82   49279.53    2635.73 1083493.77
00:27:59.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16732.26       8.17    7650.06    1598.17  382481.90
00:27:59.245 ========================================================
00:27:59.245 Total                                                              :   18417.31       8.99   11458.85    1598.17 1083493.77
00:27:59.245
00:27:59.245 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:27:59.245 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:27:59.504 true
00:27:59.504 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625353
00:27:59.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3625353) - No such process
00:27:59.504 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3625353
00:27:59.504 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:59.763 17:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
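The single-namespace phase traced above (ns_hotplug_stress.sh@44-@50) repeats one cycle until the background I/O generator (PID 3625353) exits: remove the namespace, re-attach it, bump `null_size`, and resize the null bdev. A minimal sketch of that loop, reconstructed from the traced commands rather than taken from the actual script source, with `rpc.py` stubbed out so it runs standalone and an illustrative pass count:

```shell
# Stand-in for scripts/rpc.py so the sketch is self-contained.
rpc() { echo "rpc.py $*"; }

null_size=1000
for pass in 1 2 3; do   # real script loops while `kill -0 <iopid>` succeeds
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46
    null_size=$((null_size + 1))                                # sh@49
    rpc bdev_null_resize NULL1 "$null_size"                     # sh@50
done
echo "final null_size=$null_size"
```

Each `bdev_null_resize` is what produces the lone `true` lines in the log above, while the connected initiator keeps reporting `Read completed with error (sct=0, sc=11)` for I/O that lands between remove and re-add.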
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:00.022 null0 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:00.022 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:00.281 null1 00:28:00.281 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:00.281 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:00.281 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:00.540 null2 00:28:00.540 17:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:00.540 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:00.540 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:00.798 null3 00:28:00.798 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:00.799 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:00.799 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:00.799 null4 00:28:00.799 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:00.799 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:00.799 17:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:01.057 null5 00:28:01.057 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:01.057 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:01.057 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:01.317 null6 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:01.317 null7 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.317 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3630470 3630472 3630473 3630475 3630477 3630479 3630481 3630483 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.318 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:01.577 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:01.577 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:01.577 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.577 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:01.577 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:01.577 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:01.577 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:01.578 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:01.837 17:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:01.837 17:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:02.097 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.357 17:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:02.357 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:02.617 17:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:02.617 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:02.618 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:02.877 17:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.137 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.397 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.658 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:03.918 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.918 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:03.919 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:04.178 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:04.178 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:04.178 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.178 17:46:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:04.178 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:04.178 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:04.178 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:04.178 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.437 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:04.438 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:04.697 17:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.956 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.216 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.475 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.735 17:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.735 rmmod nvme_tcp 00:28:05.735 rmmod nvme_fabrics 00:28:05.735 rmmod nvme_keyring 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3624867 ']' 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3624867 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3624867 ']' 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 3624867 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3624867 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3624867' 00:28:05.735 killing process with pid 3624867 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3624867 00:28:05.735 17:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3624867 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.994 17:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.899 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.899 00:28:07.899 real 0m47.828s 00:28:07.899 user 2m57.703s 00:28:07.899 sys 0m20.288s 00:28:07.899 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.899 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:07.899 ************************************ 00:28:07.899 END TEST nvmf_ns_hotplug_stress 00:28:07.899 ************************************ 00:28:07.899 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:07.899 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:07.899 17:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.899 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:08.160 ************************************ 00:28:08.160 START TEST nvmf_delete_subsystem 00:28:08.160 ************************************ 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:08.160 * Looking for test storage... 00:28:08.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:08.160 17:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:08.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.160 --rc genhtml_branch_coverage=1 00:28:08.160 --rc genhtml_function_coverage=1 00:28:08.160 --rc genhtml_legend=1 00:28:08.160 --rc geninfo_all_blocks=1 00:28:08.160 --rc geninfo_unexecuted_blocks=1 00:28:08.160 00:28:08.160 ' 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:08.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.160 --rc genhtml_branch_coverage=1 00:28:08.160 --rc genhtml_function_coverage=1 00:28:08.160 --rc genhtml_legend=1 00:28:08.160 --rc geninfo_all_blocks=1 00:28:08.160 --rc geninfo_unexecuted_blocks=1 00:28:08.160 00:28:08.160 ' 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:08.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.160 --rc genhtml_branch_coverage=1 00:28:08.160 --rc genhtml_function_coverage=1 00:28:08.160 --rc genhtml_legend=1 00:28:08.160 --rc geninfo_all_blocks=1 00:28:08.160 --rc geninfo_unexecuted_blocks=1 00:28:08.160 00:28:08.160 ' 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:08.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.160 --rc genhtml_branch_coverage=1 00:28:08.160 --rc genhtml_function_coverage=1 00:28:08.160 --rc genhtml_legend=1 00:28:08.160 --rc geninfo_all_blocks=1 00:28:08.160 --rc geninfo_unexecuted_blocks=1 00:28:08.160 00:28:08.160 ' 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.160 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:08.161 17:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.161 17:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.733 17:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.733 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.734 17:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:14.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:14.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.734 17:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:14.734 Found net devices under 0000:86:00.0: cvl_0_0 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:14.734 Found net devices under 0000:86:00.1: cvl_0_1 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.734 17:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.734 17:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.734 17:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:14.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:28:14.734 00:28:14.734 --- 10.0.0.2 ping statistics --- 00:28:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.734 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:28:14.734 00:28:14.734 --- 10.0.0.1 ping statistics --- 00:28:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.734 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.734 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.735 
17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3634847 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3634847 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3634847 ']' 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.735 17:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:14.735 [2024-11-19 17:46:16.307399] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:14.735 [2024-11-19 17:46:16.308330] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:28:14.735 [2024-11-19 17:46:16.308362] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.735 [2024-11-19 17:46:16.389068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:14.735 [2024-11-19 17:46:16.431629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.735 [2024-11-19 17:46:16.431667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.735 [2024-11-19 17:46:16.431674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.735 [2024-11-19 17:46:16.431680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.735 [2024-11-19 17:46:16.431686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.735 [2024-11-19 17:46:16.432829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.735 [2024-11-19 17:46:16.432832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.735 [2024-11-19 17:46:16.501194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:14.735 [2024-11-19 17:46:16.501782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:14.735 [2024-11-19 17:46:16.501902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:14.994 [2024-11-19 17:46:17.189647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.994 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:14.995 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.995 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:14.995 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.995 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.995 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.995 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:15.254 [2024-11-19 17:46:17.217946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:15.254 NULL1 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:15.254 Delay0 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3635090 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:15.254 17:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:15.254 [2024-11-19 17:46:17.334435] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:17.157 17:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:17.157 17:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.157 17:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:17.416 Read completed with error (sct=0, sc=8)
00:28:17.416 starting I/O failed: -6
00:28:17.416 Write completed with error (sct=0, sc=8)
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted ...]
00:28:17.416 [2024-11-19 17:46:19.450051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c860 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:28:17.417 [2024-11-19 17:46:19.453256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe6b4000c40 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:28:18.353 [2024-11-19 17:46:20.428915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7d9a0 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:28:18.353 [2024-11-19 17:46:20.453197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c2c0 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:28:18.353 [2024-11-19 17:46:20.453661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c680 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:28:18.353 [2024-11-19 17:46:20.454775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe6b400d800 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:28:18.353 [2024-11-19 17:46:20.455965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe6b400d020 is same with the state(6) to be set
00:28:18.353 Initializing NVMe Controllers
00:28:18.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:18.353 Controller IO queue size 128, less than required.
00:28:18.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:18.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:18.353 Initialization complete. Launching workers.
00:28:18.353 ========================================================
00:28:18.353 Latency(us)
00:28:18.353 Device Information : IOPS MiB/s Average min max
00:28:18.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.13 0.08 893115.85 311.49 1043523.08
00:28:18.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.68 0.08 912021.96 266.56 1011672.46
00:28:18.353 ========================================================
00:28:18.353 Total : 332.81 0.16 902300.43 266.56 1043523.08
00:28:18.353
00:28:18.353 [2024-11-19 17:46:20.456617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7d9a0 (9): Bad file descriptor
00:28:18.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:18.353 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.353 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:18.353 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3635090
00:28:18.353 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- #
kill -0 3635090 00:28:18.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3635090) - No such process 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3635090 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3635090 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.921 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3635090 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.922 [2024-11-19 17:46:20.981830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3635607 00:28:18.922 17:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607 00:28:18.922 17:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:18.922 [2024-11-19 17:46:21.066112] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:19.489 17:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:19.489 17:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607 00:28:19.489 17:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:20.056 17:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:20.056 17:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607 00:28:20.056 17:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:20.314 17:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:20.314 17:46:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607 00:28:20.314 17:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:20.887 17:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:20.887 17:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607 00:28:20.887 17:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:21.455 17:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:21.455 17:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607 00:28:21.455 17:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:22.027 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:22.027 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607 00:28:22.027 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:22.027 Initializing NVMe Controllers 00:28:22.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.027 Controller IO queue size 128, less than required. 00:28:22.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:22.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:22.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:22.027 Initialization complete. Launching workers.
00:28:22.027 ========================================================
00:28:22.027 Latency(us)
00:28:22.027 Device Information : IOPS MiB/s Average min max
00:28:22.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002101.36 1000146.52 1006206.99
00:28:22.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004114.61 1000226.37 1010373.71
00:28:22.027 ========================================================
00:28:22.027 Total : 256.00 0.12 1003107.99 1000146.52 1010373.71
00:28:22.027
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3635607
00:28:22.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3635607) - No such process
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3635607
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.596 rmmod nvme_tcp 00:28:22.596 rmmod nvme_fabrics 00:28:22.596 rmmod nvme_keyring 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3634847 ']' 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3634847 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3634847 ']' 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3634847 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:22.596 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3634847 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3634847' 00:28:22.597 killing process with pid 3634847 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3634847 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3634847 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.597 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:22.855 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:22.855 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.856 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.856 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.856 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.856 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:28:22.856 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.856 17:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.762 00:28:24.762 real 0m16.745s 00:28:24.762 user 0m26.081s 00:28:24.762 sys 0m6.236s 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.762 ************************************ 00:28:24.762 END TEST nvmf_delete_subsystem 00:28:24.762 ************************************ 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:24.762 ************************************ 00:28:24.762 START TEST nvmf_host_management 00:28:24.762 ************************************ 00:28:24.762 17:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:25.023 * Looking for test storage... 
00:28:25.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.023 17:46:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:25.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.023 --rc genhtml_branch_coverage=1 00:28:25.023 --rc genhtml_function_coverage=1 00:28:25.023 --rc genhtml_legend=1 00:28:25.023 --rc geninfo_all_blocks=1 00:28:25.023 --rc geninfo_unexecuted_blocks=1 00:28:25.023 00:28:25.023 ' 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:25.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.023 --rc genhtml_branch_coverage=1 00:28:25.023 --rc genhtml_function_coverage=1 00:28:25.023 --rc genhtml_legend=1 00:28:25.023 --rc geninfo_all_blocks=1 00:28:25.023 --rc geninfo_unexecuted_blocks=1 00:28:25.023 00:28:25.023 ' 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:25.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.023 --rc genhtml_branch_coverage=1 00:28:25.023 --rc genhtml_function_coverage=1 00:28:25.023 --rc genhtml_legend=1 00:28:25.023 --rc geninfo_all_blocks=1 00:28:25.023 --rc geninfo_unexecuted_blocks=1 00:28:25.023 00:28:25.023 ' 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:25.023 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.023 --rc genhtml_branch_coverage=1 00:28:25.023 --rc genhtml_function_coverage=1 00:28:25.023 --rc genhtml_legend=1 00:28:25.023 --rc geninfo_all_blocks=1 00:28:25.023 --rc geninfo_unexecuted_blocks=1 00:28:25.023 00:28:25.023 ' 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:25.023 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.024 17:46:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.024 
17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.024 17:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.682 
17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.682 17:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:31.682 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.682 17:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:31.682 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.682 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.683 17:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:31.683 Found net devices under 0000:86:00.0: cvl_0_0 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:31.683 Found net devices under 0000:86:00.1: cvl_0_1 00:28:31.683 17:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:28:31.683 00:28:31.683 --- 10.0.0.2 ping statistics --- 00:28:31.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.683 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:28:31.683 17:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:31.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:28:31.683 00:28:31.683 --- 10.0.0.1 ping statistics --- 00:28:31.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.683 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
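The `nvmf_tcp_init` sequence traced above (flush both interfaces, move the target NIC into a private namespace, address both ends, open TCP/4420 with a tagged iptables rule, then ping in both directions) can be sketched as a standalone script. Interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the `10.0.0.x` addresses are taken from this log; the `run` helper only prints the commands instead of executing them, since the real steps need root and the physical NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf/common.sh's nvmf_tcp_init as traced in this log.
# All names and addresses below come from the log output; "run" echoes the
# command instead of executing it, so this is safe to run anywhere.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

CMDS=""
run() { CMDS+="$*"$'\n'; echo "+ $*"; }   # swap body for: "$@"  to apply for real (as root)

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# The rule carries an SPDK_NVMF comment so later cleanup can find it:
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
# Connectivity check in both directions, as in the log:
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

Moving one end of the link into a namespace is what lets the target and initiator share a host while still talking over a real TCP path.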
00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3639769 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3639769 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3639769 ']' 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.683 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.683 [2024-11-19 17:46:33.101472] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:31.683 [2024-11-19 17:46:33.102512] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:28:31.683 [2024-11-19 17:46:33.102551] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.683 [2024-11-19 17:46:33.181768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.684 [2024-11-19 17:46:33.225990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.684 [2024-11-19 17:46:33.226029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.684 [2024-11-19 17:46:33.226036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.684 [2024-11-19 17:46:33.226043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.684 [2024-11-19 17:46:33.226048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:31.684 [2024-11-19 17:46:33.227708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.684 [2024-11-19 17:46:33.227817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.684 [2024-11-19 17:46:33.227924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.684 [2024-11-19 17:46:33.227925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:31.684 [2024-11-19 17:46:33.296158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:31.684 [2024-11-19 17:46:33.296737] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:31.684 [2024-11-19 17:46:33.297114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:31.684 [2024-11-19 17:46:33.297468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:31.684 [2024-11-19 17:46:33.297508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.684 [2024-11-19 17:46:33.364600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.684 17:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.684 Malloc0 00:28:31.684 [2024-11-19 17:46:33.448808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3639814 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3639814 /var/tmp/bdevperf.sock 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3639814 ']' 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:31.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.684 { 00:28:31.684 "params": { 00:28:31.684 "name": "Nvme$subsystem", 00:28:31.684 "trtype": "$TEST_TRANSPORT", 00:28:31.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.684 "adrfam": "ipv4", 00:28:31.684 "trsvcid": "$NVMF_PORT", 00:28:31.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.684 "hdgst": ${hdgst:-false}, 00:28:31.684 "ddgst": ${ddgst:-false} 00:28:31.684 }, 00:28:31.684 "method": "bdev_nvme_attach_controller" 00:28:31.684 } 00:28:31.684 EOF 00:28:31.684 )") 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:31.684 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:31.684 "params": { 00:28:31.684 "name": "Nvme0", 00:28:31.684 "trtype": "tcp", 00:28:31.684 "traddr": "10.0.0.2", 00:28:31.684 "adrfam": "ipv4", 00:28:31.684 "trsvcid": "4420", 00:28:31.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.684 "hdgst": false, 00:28:31.684 "ddgst": false 00:28:31.684 }, 00:28:31.684 "method": "bdev_nvme_attach_controller" 00:28:31.684 }' 00:28:31.684 [2024-11-19 17:46:33.542381] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:28:31.684 [2024-11-19 17:46:33.542430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639814 ] 00:28:31.684 [2024-11-19 17:46:33.616743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.684 [2024-11-19 17:46:33.657940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.944 Running I/O for 10 seconds... 
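The `gen_nvmf_target_json 0` call above expands a heredoc template into the `bdev_nvme_attach_controller` config that bdevperf reads via `--json /dev/fd/63`. A minimal sketch of that substitution, using the values printed in this log (subsystem number 0, transport `tcp`, target `10.0.0.2:4420`; the variable names mirror the ones in the trace):

```shell
# Sketch of gen_nvmf_target_json's heredoc substitution, filled with the
# values this log's run used. hdgst/ddgst fall back to false when unset,
# matching the ${hdgst:-false} / ${ddgst:-false} defaults in the template.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Generating the config as JSON on an inherited file descriptor avoids a temp file and lets the same template serve every `Nvme$subsystem` the test needs.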
00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:31.944 17:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.944 17:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:31.944 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.944 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:28:31.944 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:28:31.944 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=654 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 654 -ge 100 ']' 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.207 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:32.207 [2024-11-19 17:46:34.324348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.207 [2024-11-19 17:46:34.324453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is 
same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be 
set 00:28:32.208 [2024-11-19 17:46:34.324567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 [2024-11-19 17:46:34.324618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1ec0 is same with the state(6) to be set 00:28:32.208 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.208 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:32.208 [2024-11-19 17:46:34.329789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:32.208 [2024-11-19 17:46:34.329821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.208 [2024-11-19 17:46:34.329836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.208 [2024-11-19 17:46:34.329845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.208 [2024-11-19 17:46:34.329854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.208 [2024-11-19 17:46:34.329865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.208 [2024-11-19 17:46:34.329873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.208 [2024-11-19 17:46:34.329880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.208 [2024-11-19 17:46:34.329888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.208 [2024-11-19 17:46:34.329895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.208 [2024-11-19 17:46:34.329903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.208 [2024-11-19 17:46:34.329910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.208 [2024-11-19 17:46:34.329918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.208 [2024-11-19 17:46:34.329925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical WRITE / ABORTED - SQ DELETION (00/08) notice pairs repeated for cid:7 through cid:60 (lba 99200-105984, len:128) ...] 00:28:32.208 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.209 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:32.210 [2024-11-19 17:46:34.330754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.210 [2024-11-19
17:46:34.330760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.210 [2024-11-19 17:46:34.330768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.210 [2024-11-19 17:46:34.330774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.210 [2024-11-19 17:46:34.330782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.210 [2024-11-19 17:46:34.330789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.210 [2024-11-19 17:46:34.330816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.210 [2024-11-19 17:46:34.331756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:32.210 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:32.210 00:28:32.210 Latency(us) 00:28:32.210 [2024-11-19T16:46:34.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.210 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:32.210 Job: Nvme0n1 ended in about 0.40 seconds with error 00:28:32.210 Verification LBA range: start 0x0 length 0x400 00:28:32.210 Nvme0n1 : 0.40 1929.23 120.58 160.77 0.00 29776.16 1595.66 27582.11 00:28:32.210 [2024-11-19T16:46:34.433Z] =================================================================================================================== 00:28:32.210 [2024-11-19T16:46:34.433Z] Total : 1929.23 120.58 160.77 0.00 29776.16 1595.66 27582.11 00:28:32.210 [2024-11-19 
17:46:34.334168] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:32.210 [2024-11-19 17:46:34.334188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78500 (9): Bad file descriptor 00:28:32.210 [2024-11-19 17:46:34.335184] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:32.210 [2024-11-19 17:46:34.335260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:32.210 [2024-11-19 17:46:34.335281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.210 [2024-11-19 17:46:34.335296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:32.210 [2024-11-19 17:46:34.335304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:32.210 [2024-11-19 17:46:34.335311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.210 [2024-11-19 17:46:34.335319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc78500 00:28:32.210 [2024-11-19 17:46:34.335337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78500 (9): Bad file descriptor 00:28:32.210 [2024-11-19 17:46:34.335349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:32.210 [2024-11-19 17:46:34.335356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:32.210 [2024-11-19 17:46:34.335365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:32.210 [2024-11-19 17:46:34.335373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:32.210 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.210 17:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3639814 00:28:33.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3639814) - No such process 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.149 { 00:28:33.149 "params": { 00:28:33.149 "name": "Nvme$subsystem", 00:28:33.149 "trtype": "$TEST_TRANSPORT", 00:28:33.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.149 "adrfam": "ipv4", 00:28:33.149 "trsvcid": "$NVMF_PORT", 00:28:33.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.149 "hdgst": ${hdgst:-false}, 00:28:33.149 "ddgst": ${ddgst:-false} 00:28:33.149 }, 00:28:33.149 "method": "bdev_nvme_attach_controller" 00:28:33.149 } 00:28:33.149 EOF 00:28:33.149 )") 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:33.149 17:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:33.149 "params": { 00:28:33.149 "name": "Nvme0", 00:28:33.149 "trtype": "tcp", 00:28:33.149 "traddr": "10.0.0.2", 00:28:33.149 "adrfam": "ipv4", 00:28:33.149 "trsvcid": "4420", 00:28:33.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.149 "hdgst": false, 00:28:33.149 "ddgst": false 00:28:33.149 }, 00:28:33.149 "method": "bdev_nvme_attach_controller" 00:28:33.149 }' 00:28:33.408 [2024-11-19 17:46:35.397831] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:28:33.408 [2024-11-19 17:46:35.397877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640065 ] 00:28:33.408 [2024-11-19 17:46:35.474866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.408 [2024-11-19 17:46:35.514290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.667 Running I/O for 1 seconds... 00:28:34.605 1984.00 IOPS, 124.00 MiB/s 00:28:34.605 Latency(us) 00:28:34.605 [2024-11-19T16:46:36.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.605 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:34.605 Verification LBA range: start 0x0 length 0x400 00:28:34.605 Nvme0n1 : 1.02 2017.26 126.08 0.00 0.00 31221.92 5356.86 27582.11 00:28:34.605 [2024-11-19T16:46:36.828Z] =================================================================================================================== 00:28:34.605 [2024-11-19T16:46:36.828Z] Total : 2017.26 126.08 0.00 0.00 31221.92 5356.86 27582.11 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.865 17:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.865 rmmod nvme_tcp 00:28:34.865 rmmod nvme_fabrics 00:28:34.865 rmmod nvme_keyring 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3639769 ']' 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3639769 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3639769 ']' 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3639769 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:34.865 17:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.865 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639769 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639769' 00:28:35.125 killing process with pid 3639769 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3639769 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3639769 00:28:35.125 [2024-11-19 17:46:37.250025] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.125 17:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.125 17:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:37.663 00:28:37.663 real 0m12.389s 00:28:37.663 user 0m18.361s 00:28:37.663 sys 0m6.254s 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.663 ************************************ 00:28:37.663 END TEST nvmf_host_management 00:28:37.663 ************************************ 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.663 
17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:37.663 ************************************ 00:28:37.663 START TEST nvmf_lvol 00:28:37.663 ************************************ 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:37.663 * Looking for test storage... 00:28:37.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.663 17:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.663 --rc genhtml_branch_coverage=1 00:28:37.663 --rc 
genhtml_function_coverage=1 00:28:37.663 --rc genhtml_legend=1 00:28:37.663 --rc geninfo_all_blocks=1 00:28:37.663 --rc geninfo_unexecuted_blocks=1 00:28:37.663 00:28:37.663 ' 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.663 --rc genhtml_branch_coverage=1 00:28:37.663 --rc genhtml_function_coverage=1 00:28:37.663 --rc genhtml_legend=1 00:28:37.663 --rc geninfo_all_blocks=1 00:28:37.663 --rc geninfo_unexecuted_blocks=1 00:28:37.663 00:28:37.663 ' 00:28:37.663 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.663 --rc genhtml_branch_coverage=1 00:28:37.663 --rc genhtml_function_coverage=1 00:28:37.663 --rc genhtml_legend=1 00:28:37.663 --rc geninfo_all_blocks=1 00:28:37.663 --rc geninfo_unexecuted_blocks=1 00:28:37.663 00:28:37.663 ' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.664 --rc genhtml_branch_coverage=1 00:28:37.664 --rc genhtml_function_coverage=1 00:28:37.664 --rc genhtml_legend=1 00:28:37.664 --rc geninfo_all_blocks=1 00:28:37.664 --rc geninfo_unexecuted_blocks=1 00:28:37.664 00:28:37.664 ' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.664 17:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.664 17:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.664 17:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:44.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:44.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.232 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.233 17:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:44.233 Found net devices under 0000:86:00.0: cvl_0_0 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:44.233 Found net devices under 0000:86:00.1: cvl_0_1 00:28:44.233 17:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.233 17:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:44.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:28:44.233 00:28:44.233 --- 10.0.0.2 ping statistics --- 00:28:44.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.233 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:28:44.233 00:28:44.233 --- 10.0.0.1 ping statistics --- 00:28:44.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.233 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:44.233 
17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3643824 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3643824 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3643824 ']' 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.233 17:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:44.233 [2024-11-19 17:46:45.575719] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:44.233 [2024-11-19 17:46:45.576706] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:28:44.233 [2024-11-19 17:46:45.576744] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.233 [2024-11-19 17:46:45.659286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.233 [2024-11-19 17:46:45.700301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.233 [2024-11-19 17:46:45.700333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.233 [2024-11-19 17:46:45.700340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.233 [2024-11-19 17:46:45.700345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.233 [2024-11-19 17:46:45.700350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.233 [2024-11-19 17:46:45.701748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.233 [2024-11-19 17:46:45.701853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.233 [2024-11-19 17:46:45.701855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.233 [2024-11-19 17:46:45.770120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:44.233 [2024-11-19 17:46:45.770982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:44.233 [2024-11-19 17:46:45.771056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:44.233 [2024-11-19 17:46:45.771231] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:44.233 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.234 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:44.234 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:44.234 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.234 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:44.493 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.493 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:44.493 [2024-11-19 17:46:46.638706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.493 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:44.752 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:44.752 17:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:45.011 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:45.011 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:45.270 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:45.529 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=40423a1a-9396-4890-8d36-7fb3f8c10a97 00:28:45.529 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 40423a1a-9396-4890-8d36-7fb3f8c10a97 lvol 20 00:28:45.529 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3854f747-615e-4860-86c0-82d99c58c67e 00:28:45.529 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:45.787 17:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3854f747-615e-4860-86c0-82d99c58c67e 00:28:46.047 17:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:46.047 [2024-11-19 17:46:48.262588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.306 17:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.306 
17:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3644328 00:28:46.306 17:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:46.306 17:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:47.682 17:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3854f747-615e-4860-86c0-82d99c58c67e MY_SNAPSHOT 00:28:47.682 17:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=064267a8-97d5-4ad5-bd12-d208000a73f7 00:28:47.682 17:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3854f747-615e-4860-86c0-82d99c58c67e 30 00:28:47.940 17:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 064267a8-97d5-4ad5-bd12-d208000a73f7 MY_CLONE 00:28:48.200 17:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c061b6cf-e695-4d0f-bb5f-c4050b50dfc6 00:28:48.200 17:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c061b6cf-e695-4d0f-bb5f-c4050b50dfc6 00:28:48.768 17:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3644328 00:28:56.890 Initializing NVMe Controllers 00:28:56.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:56.890 
Controller IO queue size 128, less than required. 00:28:56.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:56.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:56.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:56.890 Initialization complete. Launching workers. 00:28:56.890 ======================================================== 00:28:56.890 Latency(us) 00:28:56.890 Device Information : IOPS MiB/s Average min max 00:28:56.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12037.30 47.02 10634.50 490.83 59377.64 00:28:56.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11917.50 46.55 10743.50 1893.69 68840.29 00:28:56.891 ======================================================== 00:28:56.891 Total : 23954.80 93.57 10688.73 490.83 68840.29 00:28:56.891 00:28:56.891 17:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:57.161 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3854f747-615e-4860-86c0-82d99c58c67e 00:28:57.161 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40423a1a-9396-4890-8d36-7fb3f8c10a97 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.425 rmmod nvme_tcp 00:28:57.425 rmmod nvme_fabrics 00:28:57.425 rmmod nvme_keyring 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3643824 ']' 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3643824 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3643824 ']' 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3643824 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.425 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3643824 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3643824' 00:28:57.683 killing process with pid 3643824 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3643824 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3643824 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.683 17:46:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.683 17:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.217 17:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.217 00:29:00.217 real 0m22.497s 00:29:00.217 user 0m55.841s 00:29:00.217 sys 0m10.041s 00:29:00.217 17:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.217 17:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:00.217 ************************************ 00:29:00.217 END TEST nvmf_lvol 00:29:00.217 ************************************ 00:29:00.217 17:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:00.217 17:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:00.217 17:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.217 17:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:00.217 ************************************ 00:29:00.217 START TEST nvmf_lvs_grow 00:29:00.217 ************************************ 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:00.217 * Looking for test storage... 
00:29:00.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.217 17:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:00.217 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.218 17:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.218 --rc genhtml_branch_coverage=1 00:29:00.218 --rc genhtml_function_coverage=1 00:29:00.218 --rc genhtml_legend=1 00:29:00.218 --rc geninfo_all_blocks=1 00:29:00.218 --rc geninfo_unexecuted_blocks=1 00:29:00.218 00:29:00.218 ' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.218 --rc genhtml_branch_coverage=1 00:29:00.218 --rc genhtml_function_coverage=1 00:29:00.218 --rc genhtml_legend=1 00:29:00.218 --rc geninfo_all_blocks=1 00:29:00.218 --rc geninfo_unexecuted_blocks=1 00:29:00.218 00:29:00.218 ' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.218 --rc genhtml_branch_coverage=1 00:29:00.218 --rc genhtml_function_coverage=1 00:29:00.218 --rc genhtml_legend=1 00:29:00.218 --rc geninfo_all_blocks=1 00:29:00.218 --rc geninfo_unexecuted_blocks=1 00:29:00.218 00:29:00.218 ' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.218 --rc genhtml_branch_coverage=1 00:29:00.218 --rc genhtml_function_coverage=1 00:29:00.218 --rc genhtml_legend=1 00:29:00.218 --rc geninfo_all_blocks=1 00:29:00.218 --rc 
geninfo_unexecuted_blocks=1 00:29:00.218 00:29:00.218 ' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.218 17:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.218 17:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.218 17:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.218 17:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.788 
17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:06.788 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.789 17:47:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.789 17:47:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:06.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:06.789 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:06.789 Found net devices under 0000:86:00.0: cvl_0_0 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.789 17:47:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:06.789 Found net devices under 0000:86:00.1: cvl_0_1 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.789 
17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.789 17:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:29:06.789 00:29:06.789 --- 10.0.0.2 ping statistics --- 00:29:06.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.789 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:06.789 00:29:06.789 --- 10.0.0.1 ping statistics --- 00:29:06.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.789 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.789 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 17:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3649685 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3649685 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3649685 ']' 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 [2024-11-19 17:47:08.193416] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:06.790 [2024-11-19 17:47:08.194316] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:29:06.790 [2024-11-19 17:47:08.194346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.790 [2024-11-19 17:47:08.273254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.790 [2024-11-19 17:47:08.314389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.790 [2024-11-19 17:47:08.314427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.790 [2024-11-19 17:47:08.314434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.790 [2024-11-19 17:47:08.314441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.790 [2024-11-19 17:47:08.314446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.790 [2024-11-19 17:47:08.315012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.790 [2024-11-19 17:47:08.382573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:06.790 [2024-11-19 17:47:08.382789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:06.790 [2024-11-19 17:47:08.619661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 ************************************ 00:29:06.790 START TEST lvs_grow_clean 00:29:06.790 ************************************ 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:06.790 17:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:06.790 17:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:07.049 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:07.049 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:07.049 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:07.307 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:07.307 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:07.307 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 lvol 150 00:29:07.307 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=38298c69-554e-4b02-b8ae-3fa32765d2db 00:29:07.307 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:07.307 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:07.567 [2024-11-19 17:47:09.683413] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:07.567 [2024-11-19 17:47:09.683546] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:07.567 true 00:29:07.567 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:07.567 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:07.826 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:07.826 17:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:08.085 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38298c69-554e-4b02-b8ae-3fa32765d2db 00:29:08.085 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:08.344 [2024-11-19 17:47:10.483903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.344 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3650188 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3650188 /var/tmp/bdevperf.sock 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3650188 ']' 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:08.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.603 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.603 [2024-11-19 17:47:10.735544] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:29:08.603 [2024-11-19 17:47:10.735591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650188 ] 00:29:08.603 [2024-11-19 17:47:10.810397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.863 [2024-11-19 17:47:10.853638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.863 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.863 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:08.863 17:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:09.431 Nvme0n1 00:29:09.431 17:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:09.431 [ 00:29:09.431 { 00:29:09.431 "name": "Nvme0n1", 00:29:09.431 "aliases": [ 00:29:09.431 "38298c69-554e-4b02-b8ae-3fa32765d2db" 00:29:09.431 ], 00:29:09.431 "product_name": "NVMe disk", 00:29:09.431 
"block_size": 4096, 00:29:09.431 "num_blocks": 38912, 00:29:09.431 "uuid": "38298c69-554e-4b02-b8ae-3fa32765d2db", 00:29:09.431 "numa_id": 1, 00:29:09.431 "assigned_rate_limits": { 00:29:09.431 "rw_ios_per_sec": 0, 00:29:09.431 "rw_mbytes_per_sec": 0, 00:29:09.431 "r_mbytes_per_sec": 0, 00:29:09.431 "w_mbytes_per_sec": 0 00:29:09.431 }, 00:29:09.431 "claimed": false, 00:29:09.431 "zoned": false, 00:29:09.431 "supported_io_types": { 00:29:09.431 "read": true, 00:29:09.431 "write": true, 00:29:09.431 "unmap": true, 00:29:09.431 "flush": true, 00:29:09.431 "reset": true, 00:29:09.431 "nvme_admin": true, 00:29:09.431 "nvme_io": true, 00:29:09.431 "nvme_io_md": false, 00:29:09.431 "write_zeroes": true, 00:29:09.431 "zcopy": false, 00:29:09.431 "get_zone_info": false, 00:29:09.431 "zone_management": false, 00:29:09.431 "zone_append": false, 00:29:09.431 "compare": true, 00:29:09.432 "compare_and_write": true, 00:29:09.432 "abort": true, 00:29:09.432 "seek_hole": false, 00:29:09.432 "seek_data": false, 00:29:09.432 "copy": true, 00:29:09.432 "nvme_iov_md": false 00:29:09.432 }, 00:29:09.432 "memory_domains": [ 00:29:09.432 { 00:29:09.432 "dma_device_id": "system", 00:29:09.432 "dma_device_type": 1 00:29:09.432 } 00:29:09.432 ], 00:29:09.432 "driver_specific": { 00:29:09.432 "nvme": [ 00:29:09.432 { 00:29:09.432 "trid": { 00:29:09.432 "trtype": "TCP", 00:29:09.432 "adrfam": "IPv4", 00:29:09.432 "traddr": "10.0.0.2", 00:29:09.432 "trsvcid": "4420", 00:29:09.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:09.432 }, 00:29:09.432 "ctrlr_data": { 00:29:09.432 "cntlid": 1, 00:29:09.432 "vendor_id": "0x8086", 00:29:09.432 "model_number": "SPDK bdev Controller", 00:29:09.432 "serial_number": "SPDK0", 00:29:09.432 "firmware_revision": "25.01", 00:29:09.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.432 "oacs": { 00:29:09.432 "security": 0, 00:29:09.432 "format": 0, 00:29:09.432 "firmware": 0, 00:29:09.432 "ns_manage": 0 00:29:09.432 }, 00:29:09.432 "multi_ctrlr": true, 
00:29:09.432 "ana_reporting": false 00:29:09.432 }, 00:29:09.432 "vs": { 00:29:09.432 "nvme_version": "1.3" 00:29:09.432 }, 00:29:09.432 "ns_data": { 00:29:09.432 "id": 1, 00:29:09.432 "can_share": true 00:29:09.432 } 00:29:09.432 } 00:29:09.432 ], 00:29:09.432 "mp_policy": "active_passive" 00:29:09.432 } 00:29:09.432 } 00:29:09.432 ] 00:29:09.432 17:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3650196 00:29:09.432 17:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:09.432 17:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:09.691 Running I/O for 10 seconds... 00:29:10.630 Latency(us) 00:29:10.631 [2024-11-19T16:47:12.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.631 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:10.631 [2024-11-19T16:47:12.854Z] =================================================================================================================== 00:29:10.631 [2024-11-19T16:47:12.854Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:10.631 00:29:11.569 17:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:11.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.569 Nvme0n1 : 2.00 22638.00 88.43 0.00 0.00 0.00 0.00 0.00 00:29:11.569 [2024-11-19T16:47:13.792Z] 
=================================================================================================================== 00:29:11.569 [2024-11-19T16:47:13.792Z] Total : 22638.00 88.43 0.00 0.00 0.00 0.00 0.00 00:29:11.569 00:29:11.569 true 00:29:11.569 17:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:11.569 17:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:11.828 17:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:11.828 17:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:11.828 17:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3650196 00:29:12.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.766 Nvme0n1 : 3.00 22754.33 88.88 0.00 0.00 0.00 0.00 0.00 00:29:12.766 [2024-11-19T16:47:14.989Z] =================================================================================================================== 00:29:12.766 [2024-11-19T16:47:14.989Z] Total : 22754.33 88.88 0.00 0.00 0.00 0.00 0.00 00:29:12.766 00:29:13.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.704 Nvme0n1 : 4.00 22844.25 89.24 0.00 0.00 0.00 0.00 0.00 00:29:13.704 [2024-11-19T16:47:15.927Z] =================================================================================================================== 00:29:13.704 [2024-11-19T16:47:15.927Z] Total : 22844.25 89.24 0.00 0.00 0.00 0.00 0.00 00:29:13.704 00:29:14.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:14.640 Nvme0n1 : 5.00 22911.00 89.50 0.00 0.00 0.00 0.00 0.00 00:29:14.640 [2024-11-19T16:47:16.863Z] =================================================================================================================== 00:29:14.640 [2024-11-19T16:47:16.863Z] Total : 22911.00 89.50 0.00 0.00 0.00 0.00 0.00 00:29:14.640 00:29:15.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.577 Nvme0n1 : 6.00 22963.50 89.70 0.00 0.00 0.00 0.00 0.00 00:29:15.577 [2024-11-19T16:47:17.800Z] =================================================================================================================== 00:29:15.577 [2024-11-19T16:47:17.800Z] Total : 22963.50 89.70 0.00 0.00 0.00 0.00 0.00 00:29:15.577 00:29:16.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.577 Nvme0n1 : 7.00 22985.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:16.577 [2024-11-19T16:47:18.800Z] =================================================================================================================== 00:29:16.577 [2024-11-19T16:47:18.800Z] Total : 22985.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:16.577 00:29:17.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.514 Nvme0n1 : 8.00 23017.00 89.91 0.00 0.00 0.00 0.00 0.00 00:29:17.514 [2024-11-19T16:47:19.737Z] =================================================================================================================== 00:29:17.514 [2024-11-19T16:47:19.737Z] Total : 23017.00 89.91 0.00 0.00 0.00 0.00 0.00 00:29:17.514 00:29:18.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.898 Nvme0n1 : 9.00 22985.44 89.79 0.00 0.00 0.00 0.00 0.00 00:29:18.898 [2024-11-19T16:47:21.121Z] =================================================================================================================== 00:29:18.898 [2024-11-19T16:47:21.122Z] Total : 22985.44 89.79 0.00 0.00 0.00 0.00 0.00 00:29:18.899 
00:29:19.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.467 Nvme0n1 : 10.00 22998.30 89.84 0.00 0.00 0.00 0.00 0.00 00:29:19.467 [2024-11-19T16:47:21.690Z] =================================================================================================================== 00:29:19.467 [2024-11-19T16:47:21.690Z] Total : 22998.30 89.84 0.00 0.00 0.00 0.00 0.00 00:29:19.467 00:29:19.727 00:29:19.727 Latency(us) 00:29:19.727 [2024-11-19T16:47:21.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.727 Nvme0n1 : 10.00 23004.22 89.86 0.00 0.00 5561.29 3191.32 27240.18 00:29:19.727 [2024-11-19T16:47:21.950Z] =================================================================================================================== 00:29:19.727 [2024-11-19T16:47:21.950Z] Total : 23004.22 89.86 0.00 0.00 5561.29 3191.32 27240.18 00:29:19.727 { 00:29:19.727 "results": [ 00:29:19.727 { 00:29:19.727 "job": "Nvme0n1", 00:29:19.727 "core_mask": "0x2", 00:29:19.727 "workload": "randwrite", 00:29:19.727 "status": "finished", 00:29:19.727 "queue_depth": 128, 00:29:19.727 "io_size": 4096, 00:29:19.727 "runtime": 10.002991, 00:29:19.727 "iops": 23004.219437966105, 00:29:19.727 "mibps": 89.8602321795551, 00:29:19.727 "io_failed": 0, 00:29:19.727 "io_timeout": 0, 00:29:19.727 "avg_latency_us": 5561.289457575578, 00:29:19.727 "min_latency_us": 3191.318260869565, 00:29:19.727 "max_latency_us": 27240.180869565218 00:29:19.727 } 00:29:19.727 ], 00:29:19.727 "core_count": 1 00:29:19.727 } 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3650188 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3650188 ']' 00:29:19.727 17:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3650188 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3650188 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3650188' 00:29:19.727 killing process with pid 3650188 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3650188 00:29:19.727 Received shutdown signal, test time was about 10.000000 seconds 00:29:19.727 00:29:19.727 Latency(us) 00:29:19.727 [2024-11-19T16:47:21.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.727 [2024-11-19T16:47:21.950Z] =================================================================================================================== 00:29:19.727 [2024-11-19T16:47:21.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3650188 00:29:19.727 17:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:19.987 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:20.246 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:20.246 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:20.505 [2024-11-19 17:47:22.675480] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:20.505 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:20.764 request: 00:29:20.764 { 00:29:20.764 "uuid": "9c207d1b-e879-4a68-8feb-fbc3436059b1", 00:29:20.764 "method": 
"bdev_lvol_get_lvstores", 00:29:20.764 "req_id": 1 00:29:20.764 } 00:29:20.764 Got JSON-RPC error response 00:29:20.764 response: 00:29:20.764 { 00:29:20.764 "code": -19, 00:29:20.764 "message": "No such device" 00:29:20.764 } 00:29:20.765 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:20.765 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:20.765 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:20.765 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:20.765 17:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:21.024 aio_bdev 00:29:21.024 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 38298c69-554e-4b02-b8ae-3fa32765d2db 00:29:21.024 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=38298c69-554e-4b02-b8ae-3fa32765d2db 00:29:21.024 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:21.024 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:21.024 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:21.024 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:21.024 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:21.283 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 38298c69-554e-4b02-b8ae-3fa32765d2db -t 2000 00:29:21.283 [ 00:29:21.283 { 00:29:21.283 "name": "38298c69-554e-4b02-b8ae-3fa32765d2db", 00:29:21.283 "aliases": [ 00:29:21.283 "lvs/lvol" 00:29:21.283 ], 00:29:21.283 "product_name": "Logical Volume", 00:29:21.283 "block_size": 4096, 00:29:21.283 "num_blocks": 38912, 00:29:21.283 "uuid": "38298c69-554e-4b02-b8ae-3fa32765d2db", 00:29:21.283 "assigned_rate_limits": { 00:29:21.283 "rw_ios_per_sec": 0, 00:29:21.283 "rw_mbytes_per_sec": 0, 00:29:21.283 "r_mbytes_per_sec": 0, 00:29:21.283 "w_mbytes_per_sec": 0 00:29:21.283 }, 00:29:21.283 "claimed": false, 00:29:21.283 "zoned": false, 00:29:21.283 "supported_io_types": { 00:29:21.283 "read": true, 00:29:21.283 "write": true, 00:29:21.283 "unmap": true, 00:29:21.283 "flush": false, 00:29:21.283 "reset": true, 00:29:21.283 "nvme_admin": false, 00:29:21.283 "nvme_io": false, 00:29:21.283 "nvme_io_md": false, 00:29:21.283 "write_zeroes": true, 00:29:21.283 "zcopy": false, 00:29:21.283 "get_zone_info": false, 00:29:21.283 "zone_management": false, 00:29:21.283 "zone_append": false, 00:29:21.283 "compare": false, 00:29:21.283 "compare_and_write": false, 00:29:21.283 "abort": false, 00:29:21.283 "seek_hole": true, 00:29:21.283 "seek_data": true, 00:29:21.283 "copy": false, 00:29:21.283 "nvme_iov_md": false 00:29:21.283 }, 00:29:21.283 "driver_specific": { 00:29:21.283 "lvol": { 00:29:21.283 "lvol_store_uuid": "9c207d1b-e879-4a68-8feb-fbc3436059b1", 00:29:21.283 "base_bdev": "aio_bdev", 00:29:21.283 
"thin_provision": false, 00:29:21.283 "num_allocated_clusters": 38, 00:29:21.283 "snapshot": false, 00:29:21.283 "clone": false, 00:29:21.283 "esnap_clone": false 00:29:21.283 } 00:29:21.283 } 00:29:21.283 } 00:29:21.283 ] 00:29:21.283 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:21.283 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:21.283 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:21.542 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:21.542 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 00:29:21.542 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:21.802 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:21.802 17:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38298c69-554e-4b02-b8ae-3fa32765d2db 00:29:22.060 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c207d1b-e879-4a68-8feb-fbc3436059b1 
00:29:22.060 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:22.320 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:22.320 00:29:22.320 real 0m15.817s 00:29:22.320 user 0m15.301s 00:29:22.320 sys 0m1.524s 00:29:22.320 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.320 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:22.320 ************************************ 00:29:22.320 END TEST lvs_grow_clean 00:29:22.320 ************************************ 00:29:22.320 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:22.320 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:22.320 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.320 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.579 ************************************ 00:29:22.579 START TEST lvs_grow_dirty 00:29:22.579 ************************************ 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:22.579 17:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:22.579 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:22.838 17:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:22.838 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:22.838 17:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:23.097 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:23.097 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:23.097 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ea44448c-31fd-4e12-983f-ea9d961183b5 lvol 150 00:29:23.356 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=11b3c8b6-2576-4935-aa23-04fbfe99c8c1 00:29:23.356 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.356 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:23.356 [2024-11-19 17:47:25.539408] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:23.356 [2024-11-19 
17:47:25.539538] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:23.356 true 00:29:23.356 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:23.356 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:23.616 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:23.616 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:23.875 17:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 11b3c8b6-2576-4935-aa23-04fbfe99c8c1 00:29:24.134 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:24.134 [2024-11-19 17:47:26.311836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.134 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.392 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:24.392 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3652771 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3652771 /var/tmp/bdevperf.sock 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3652771 ']' 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:24.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.393 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:24.393 [2024-11-19 17:47:26.572917] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:29:24.393 [2024-11-19 17:47:26.572971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652771 ] 00:29:24.651 [2024-11-19 17:47:26.645752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.652 [2024-11-19 17:47:26.688446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.652 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.652 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:24.652 17:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:25.220 Nvme0n1 00:29:25.220 17:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:25.220 [ 00:29:25.220 { 00:29:25.220 "name": "Nvme0n1", 00:29:25.220 "aliases": [ 00:29:25.220 "11b3c8b6-2576-4935-aa23-04fbfe99c8c1" 00:29:25.220 ], 00:29:25.220 "product_name": "NVMe disk", 00:29:25.220 "block_size": 4096, 00:29:25.220 "num_blocks": 38912, 00:29:25.220 "uuid": "11b3c8b6-2576-4935-aa23-04fbfe99c8c1", 00:29:25.220 "numa_id": 1, 00:29:25.220 "assigned_rate_limits": { 00:29:25.220 "rw_ios_per_sec": 0, 00:29:25.220 "rw_mbytes_per_sec": 0, 00:29:25.220 "r_mbytes_per_sec": 0, 00:29:25.220 "w_mbytes_per_sec": 0 00:29:25.220 }, 00:29:25.220 "claimed": false, 00:29:25.220 "zoned": false, 
00:29:25.220 "supported_io_types": { 00:29:25.220 "read": true, 00:29:25.220 "write": true, 00:29:25.220 "unmap": true, 00:29:25.220 "flush": true, 00:29:25.220 "reset": true, 00:29:25.220 "nvme_admin": true, 00:29:25.220 "nvme_io": true, 00:29:25.220 "nvme_io_md": false, 00:29:25.220 "write_zeroes": true, 00:29:25.220 "zcopy": false, 00:29:25.220 "get_zone_info": false, 00:29:25.220 "zone_management": false, 00:29:25.220 "zone_append": false, 00:29:25.220 "compare": true, 00:29:25.220 "compare_and_write": true, 00:29:25.220 "abort": true, 00:29:25.220 "seek_hole": false, 00:29:25.220 "seek_data": false, 00:29:25.220 "copy": true, 00:29:25.220 "nvme_iov_md": false 00:29:25.220 }, 00:29:25.220 "memory_domains": [ 00:29:25.220 { 00:29:25.220 "dma_device_id": "system", 00:29:25.220 "dma_device_type": 1 00:29:25.220 } 00:29:25.220 ], 00:29:25.220 "driver_specific": { 00:29:25.220 "nvme": [ 00:29:25.220 { 00:29:25.220 "trid": { 00:29:25.220 "trtype": "TCP", 00:29:25.220 "adrfam": "IPv4", 00:29:25.220 "traddr": "10.0.0.2", 00:29:25.220 "trsvcid": "4420", 00:29:25.220 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:25.220 }, 00:29:25.220 "ctrlr_data": { 00:29:25.220 "cntlid": 1, 00:29:25.220 "vendor_id": "0x8086", 00:29:25.220 "model_number": "SPDK bdev Controller", 00:29:25.220 "serial_number": "SPDK0", 00:29:25.220 "firmware_revision": "25.01", 00:29:25.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.220 "oacs": { 00:29:25.220 "security": 0, 00:29:25.220 "format": 0, 00:29:25.220 "firmware": 0, 00:29:25.220 "ns_manage": 0 00:29:25.220 }, 00:29:25.220 "multi_ctrlr": true, 00:29:25.220 "ana_reporting": false 00:29:25.220 }, 00:29:25.220 "vs": { 00:29:25.220 "nvme_version": "1.3" 00:29:25.220 }, 00:29:25.220 "ns_data": { 00:29:25.220 "id": 1, 00:29:25.220 "can_share": true 00:29:25.220 } 00:29:25.221 } 00:29:25.221 ], 00:29:25.221 "mp_policy": "active_passive" 00:29:25.221 } 00:29:25.221 } 00:29:25.221 ] 00:29:25.221 17:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:25.221 17:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3652782 00:29:25.221 17:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:25.479 Running I/O for 10 seconds... 00:29:26.416 Latency(us) 00:29:26.416 [2024-11-19T16:47:28.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.416 Nvme0n1 : 1.00 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:29:26.416 [2024-11-19T16:47:28.639Z] =================================================================================================================== 00:29:26.416 [2024-11-19T16:47:28.639Z] Total : 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:29:26.416 00:29:27.353 17:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:27.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.353 Nvme0n1 : 2.00 22440.00 87.66 0.00 0.00 0.00 0.00 0.00 00:29:27.353 [2024-11-19T16:47:29.576Z] =================================================================================================================== 00:29:27.353 [2024-11-19T16:47:29.576Z] Total : 22440.00 87.66 0.00 0.00 0.00 0.00 0.00 00:29:27.353 00:29:27.612 true 00:29:27.612 17:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:27.612 17:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:27.612 17:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:27.612 17:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:27.612 17:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3652782 00:29:28.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.550 Nvme0n1 : 3.00 22432.00 87.62 0.00 0.00 0.00 0.00 0.00 00:29:28.550 [2024-11-19T16:47:30.773Z] =================================================================================================================== 00:29:28.550 [2024-11-19T16:47:30.773Z] Total : 22432.00 87.62 0.00 0.00 0.00 0.00 0.00 00:29:28.550 00:29:29.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.486 Nvme0n1 : 4.00 22515.75 87.95 0.00 0.00 0.00 0.00 0.00 00:29:29.486 [2024-11-19T16:47:31.710Z] =================================================================================================================== 00:29:29.487 [2024-11-19T16:47:31.710Z] Total : 22515.75 87.95 0.00 0.00 0.00 0.00 0.00 00:29:29.487 00:29:30.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.424 Nvme0n1 : 5.00 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:29:30.424 [2024-11-19T16:47:32.647Z] =================================================================================================================== 00:29:30.424 [2024-11-19T16:47:32.647Z] Total : 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:29:30.424 00:29:31.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:31.361 Nvme0n1 : 6.00 22670.33 88.56 0.00 0.00 0.00 0.00 0.00 00:29:31.361 [2024-11-19T16:47:33.584Z] =================================================================================================================== 00:29:31.361 [2024-11-19T16:47:33.584Z] Total : 22670.33 88.56 0.00 0.00 0.00 0.00 0.00 00:29:31.361 00:29:32.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.298 Nvme0n1 : 7.00 22724.71 88.77 0.00 0.00 0.00 0.00 0.00 00:29:32.298 [2024-11-19T16:47:34.521Z] =================================================================================================================== 00:29:32.298 [2024-11-19T16:47:34.521Z] Total : 22724.71 88.77 0.00 0.00 0.00 0.00 0.00 00:29:32.298 00:29:33.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.783 Nvme0n1 : 8.00 22761.75 88.91 0.00 0.00 0.00 0.00 0.00 00:29:33.783 [2024-11-19T16:47:36.006Z] =================================================================================================================== 00:29:33.783 [2024-11-19T16:47:36.007Z] Total : 22761.75 88.91 0.00 0.00 0.00 0.00 0.00 00:29:33.784 00:29:34.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.352 Nvme0n1 : 9.00 22786.78 89.01 0.00 0.00 0.00 0.00 0.00 00:29:34.352 [2024-11-19T16:47:36.575Z] =================================================================================================================== 00:29:34.352 [2024-11-19T16:47:36.575Z] Total : 22786.78 89.01 0.00 0.00 0.00 0.00 0.00 00:29:34.352 00:29:35.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.732 Nvme0n1 : 10.00 22813.20 89.11 0.00 0.00 0.00 0.00 0.00 00:29:35.732 [2024-11-19T16:47:37.955Z] =================================================================================================================== 00:29:35.732 [2024-11-19T16:47:37.955Z] Total : 22813.20 89.11 0.00 0.00 0.00 0.00 0.00 00:29:35.732 00:29:35.732 
00:29:35.732 Latency(us) 00:29:35.732 [2024-11-19T16:47:37.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.732 Nvme0n1 : 10.00 22813.67 89.12 0.00 0.00 5607.48 2621.44 27354.16 00:29:35.732 [2024-11-19T16:47:37.955Z] =================================================================================================================== 00:29:35.732 [2024-11-19T16:47:37.955Z] Total : 22813.67 89.12 0.00 0.00 5607.48 2621.44 27354.16 00:29:35.732 { 00:29:35.732 "results": [ 00:29:35.732 { 00:29:35.732 "job": "Nvme0n1", 00:29:35.732 "core_mask": "0x2", 00:29:35.732 "workload": "randwrite", 00:29:35.732 "status": "finished", 00:29:35.732 "queue_depth": 128, 00:29:35.732 "io_size": 4096, 00:29:35.732 "runtime": 10.002599, 00:29:35.732 "iops": 22813.67072697806, 00:29:35.732 "mibps": 89.11590127725805, 00:29:35.732 "io_failed": 0, 00:29:35.732 "io_timeout": 0, 00:29:35.732 "avg_latency_us": 5607.484987657444, 00:29:35.732 "min_latency_us": 2621.44, 00:29:35.732 "max_latency_us": 27354.15652173913 00:29:35.732 } 00:29:35.732 ], 00:29:35.732 "core_count": 1 00:29:35.732 } 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3652771 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3652771 ']' 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3652771 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.732 17:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3652771 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3652771' 00:29:35.732 killing process with pid 3652771 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3652771 00:29:35.732 Received shutdown signal, test time was about 10.000000 seconds 00:29:35.732 00:29:35.732 Latency(us) 00:29:35.732 [2024-11-19T16:47:37.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.732 [2024-11-19T16:47:37.955Z] =================================================================================================================== 00:29:35.732 [2024-11-19T16:47:37.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3652771 00:29:35.732 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.992 17:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.992 17:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:35.992 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3649685 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3649685 00:29:36.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3649685 Killed "${NVMF_APP[@]}" "$@" 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3654621 00:29:36.251 17:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3654621 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3654621 ']' 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.251 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:36.511 [2024-11-19 17:47:38.473290] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:36.511 [2024-11-19 17:47:38.474219] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:29:36.511 [2024-11-19 17:47:38.474256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.511 [2024-11-19 17:47:38.554743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.511 [2024-11-19 17:47:38.595405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.511 [2024-11-19 17:47:38.595438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.511 [2024-11-19 17:47:38.595445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.511 [2024-11-19 17:47:38.595451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.511 [2024-11-19 17:47:38.595456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.511 [2024-11-19 17:47:38.595984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.511 [2024-11-19 17:47:38.662394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:36.511 [2024-11-19 17:47:38.662605] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:36.511 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.511 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:36.511 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.511 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.511 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:36.511 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.511 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:36.770 [2024-11-19 17:47:38.905325] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:36.770 [2024-11-19 17:47:38.905534] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:36.770 [2024-11-19 17:47:38.905624] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 11b3c8b6-2576-4935-aa23-04fbfe99c8c1 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=11b3c8b6-2576-4935-aa23-04fbfe99c8c1 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:36.770 17:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:37.029 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 11b3c8b6-2576-4935-aa23-04fbfe99c8c1 -t 2000 00:29:37.288 [ 00:29:37.288 { 00:29:37.288 "name": "11b3c8b6-2576-4935-aa23-04fbfe99c8c1", 00:29:37.288 "aliases": [ 00:29:37.288 "lvs/lvol" 00:29:37.288 ], 00:29:37.288 "product_name": "Logical Volume", 00:29:37.288 "block_size": 4096, 00:29:37.288 "num_blocks": 38912, 00:29:37.288 "uuid": "11b3c8b6-2576-4935-aa23-04fbfe99c8c1", 00:29:37.288 "assigned_rate_limits": { 00:29:37.288 "rw_ios_per_sec": 0, 00:29:37.288 "rw_mbytes_per_sec": 0, 00:29:37.288 "r_mbytes_per_sec": 0, 00:29:37.288 "w_mbytes_per_sec": 0 00:29:37.288 }, 00:29:37.288 "claimed": false, 00:29:37.288 "zoned": false, 00:29:37.288 "supported_io_types": { 00:29:37.288 "read": true, 00:29:37.288 "write": true, 00:29:37.288 "unmap": true, 00:29:37.288 "flush": false, 00:29:37.288 "reset": true, 00:29:37.288 "nvme_admin": false, 00:29:37.288 "nvme_io": false, 00:29:37.288 "nvme_io_md": false, 00:29:37.288 "write_zeroes": true, 
00:29:37.288 "zcopy": false, 00:29:37.288 "get_zone_info": false, 00:29:37.288 "zone_management": false, 00:29:37.288 "zone_append": false, 00:29:37.288 "compare": false, 00:29:37.288 "compare_and_write": false, 00:29:37.288 "abort": false, 00:29:37.288 "seek_hole": true, 00:29:37.288 "seek_data": true, 00:29:37.288 "copy": false, 00:29:37.288 "nvme_iov_md": false 00:29:37.288 }, 00:29:37.288 "driver_specific": { 00:29:37.288 "lvol": { 00:29:37.288 "lvol_store_uuid": "ea44448c-31fd-4e12-983f-ea9d961183b5", 00:29:37.288 "base_bdev": "aio_bdev", 00:29:37.288 "thin_provision": false, 00:29:37.288 "num_allocated_clusters": 38, 00:29:37.288 "snapshot": false, 00:29:37.288 "clone": false, 00:29:37.288 "esnap_clone": false 00:29:37.288 } 00:29:37.288 } 00:29:37.288 } 00:29:37.288 ] 00:29:37.288 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:37.288 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:37.288 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:37.547 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:37.547 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:37.547 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:37.547 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:37.547 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:37.806 [2024-11-19 17:47:39.888441] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.806 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.807 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.807 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:37.807 17:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:38.066 request: 00:29:38.066 { 00:29:38.066 "uuid": "ea44448c-31fd-4e12-983f-ea9d961183b5", 00:29:38.066 "method": "bdev_lvol_get_lvstores", 00:29:38.066 "req_id": 1 00:29:38.066 } 00:29:38.066 Got JSON-RPC error response 00:29:38.066 response: 00:29:38.066 { 00:29:38.066 "code": -19, 00:29:38.066 "message": "No such device" 00:29:38.066 } 00:29:38.066 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:38.066 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:38.066 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:38.066 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:38.066 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:38.324 aio_bdev 00:29:38.324 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 11b3c8b6-2576-4935-aa23-04fbfe99c8c1 00:29:38.324 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=11b3c8b6-2576-4935-aa23-04fbfe99c8c1 00:29:38.324 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:38.325 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:38.325 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:38.325 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:38.325 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:38.325 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 11b3c8b6-2576-4935-aa23-04fbfe99c8c1 -t 2000 00:29:38.584 [ 00:29:38.584 { 00:29:38.584 "name": "11b3c8b6-2576-4935-aa23-04fbfe99c8c1", 00:29:38.584 "aliases": [ 00:29:38.584 "lvs/lvol" 00:29:38.584 ], 00:29:38.584 "product_name": "Logical Volume", 00:29:38.584 "block_size": 4096, 00:29:38.584 "num_blocks": 38912, 00:29:38.584 "uuid": "11b3c8b6-2576-4935-aa23-04fbfe99c8c1", 00:29:38.584 "assigned_rate_limits": { 00:29:38.584 "rw_ios_per_sec": 0, 00:29:38.584 "rw_mbytes_per_sec": 0, 00:29:38.584 
"r_mbytes_per_sec": 0, 00:29:38.584 "w_mbytes_per_sec": 0 00:29:38.584 }, 00:29:38.584 "claimed": false, 00:29:38.584 "zoned": false, 00:29:38.584 "supported_io_types": { 00:29:38.584 "read": true, 00:29:38.584 "write": true, 00:29:38.584 "unmap": true, 00:29:38.584 "flush": false, 00:29:38.584 "reset": true, 00:29:38.584 "nvme_admin": false, 00:29:38.584 "nvme_io": false, 00:29:38.584 "nvme_io_md": false, 00:29:38.584 "write_zeroes": true, 00:29:38.584 "zcopy": false, 00:29:38.584 "get_zone_info": false, 00:29:38.584 "zone_management": false, 00:29:38.584 "zone_append": false, 00:29:38.584 "compare": false, 00:29:38.584 "compare_and_write": false, 00:29:38.584 "abort": false, 00:29:38.584 "seek_hole": true, 00:29:38.584 "seek_data": true, 00:29:38.584 "copy": false, 00:29:38.584 "nvme_iov_md": false 00:29:38.584 }, 00:29:38.584 "driver_specific": { 00:29:38.584 "lvol": { 00:29:38.584 "lvol_store_uuid": "ea44448c-31fd-4e12-983f-ea9d961183b5", 00:29:38.584 "base_bdev": "aio_bdev", 00:29:38.584 "thin_provision": false, 00:29:38.584 "num_allocated_clusters": 38, 00:29:38.584 "snapshot": false, 00:29:38.584 "clone": false, 00:29:38.584 "esnap_clone": false 00:29:38.584 } 00:29:38.584 } 00:29:38.584 } 00:29:38.584 ] 00:29:38.584 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:38.584 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:38.584 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:38.843 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:38.843 17:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:38.843 17:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:39.102 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:39.102 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 11b3c8b6-2576-4935-aa23-04fbfe99c8c1 00:29:39.102 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea44448c-31fd-4e12-983f-ea9d961183b5 00:29:39.361 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:39.621 00:29:39.621 real 0m17.167s 00:29:39.621 user 0m34.659s 00:29:39.621 sys 0m3.722s 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 ************************************ 00:29:39.621 END TEST lvs_grow_dirty 00:29:39.621 ************************************ 
00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:39.621 nvmf_trace.0 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:39.621 17:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:39.621 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.621 rmmod nvme_tcp 00:29:39.880 rmmod nvme_fabrics 00:29:39.880 rmmod nvme_keyring 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3654621 ']' 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3654621 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3654621 ']' 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3654621 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3654621 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:39.880 
17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3654621' 00:29:39.880 killing process with pid 3654621 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3654621 00:29:39.880 17:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3654621 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.139 17:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.045 
17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.045 00:29:42.045 real 0m42.200s 00:29:42.045 user 0m52.488s 00:29:42.045 sys 0m10.159s 00:29:42.045 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.045 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:42.045 ************************************ 00:29:42.045 END TEST nvmf_lvs_grow 00:29:42.045 ************************************ 00:29:42.045 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:42.045 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:42.045 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.045 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:42.305 ************************************ 00:29:42.305 START TEST nvmf_bdev_io_wait 00:29:42.305 ************************************ 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:42.305 * Looking for test storage... 
00:29:42.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.305 --rc genhtml_branch_coverage=1 00:29:42.305 --rc genhtml_function_coverage=1 00:29:42.305 --rc genhtml_legend=1 00:29:42.305 --rc geninfo_all_blocks=1 00:29:42.305 --rc geninfo_unexecuted_blocks=1 00:29:42.305 00:29:42.305 ' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.305 --rc genhtml_branch_coverage=1 00:29:42.305 --rc genhtml_function_coverage=1 00:29:42.305 --rc genhtml_legend=1 00:29:42.305 --rc geninfo_all_blocks=1 00:29:42.305 --rc geninfo_unexecuted_blocks=1 00:29:42.305 00:29:42.305 ' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.305 --rc genhtml_branch_coverage=1 00:29:42.305 --rc genhtml_function_coverage=1 00:29:42.305 --rc genhtml_legend=1 00:29:42.305 --rc geninfo_all_blocks=1 00:29:42.305 --rc geninfo_unexecuted_blocks=1 00:29:42.305 00:29:42.305 ' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.305 --rc genhtml_branch_coverage=1 00:29:42.305 --rc genhtml_function_coverage=1 
00:29:42.305 --rc genhtml_legend=1 00:29:42.305 --rc geninfo_all_blocks=1 00:29:42.305 --rc geninfo_unexecuted_blocks=1 00:29:42.305 00:29:42.305 ' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:42.305 17:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.305 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.306 17:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.306 17:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.306 17:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.306 17:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:48.875 17:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:48.875 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:48.875 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:48.875 Found net devices under 0000:86:00.0: cvl_0_0 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.875 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:48.876 Found net devices under 0000:86:00.1: cvl_0_1 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.876 17:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:48.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:29:48.876 00:29:48.876 --- 10.0.0.2 ping statistics --- 00:29:48.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.876 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:29:48.876 00:29:48.876 --- 10.0.0.1 ping statistics --- 00:29:48.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.876 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.876 17:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3658662 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3658662 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3658662 ']' 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.876 [2024-11-19 17:47:50.387272] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.876 [2024-11-19 17:47:50.388281] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:29:48.876 [2024-11-19 17:47:50.388321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.876 [2024-11-19 17:47:50.467520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.876 [2024-11-19 17:47:50.512365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.876 [2024-11-19 17:47:50.512403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.876 [2024-11-19 17:47:50.512411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.876 [2024-11-19 17:47:50.512417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.876 [2024-11-19 17:47:50.512422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:48.876 [2024-11-19 17:47:50.513877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.876 [2024-11-19 17:47:50.513987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.876 [2024-11-19 17:47:50.514036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.876 [2024-11-19 17:47:50.514036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.876 [2024-11-19 17:47:50.514441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.876 17:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.876 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.877 [2024-11-19 17:47:50.647853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:48.877 [2024-11-19 17:47:50.648425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:48.877 [2024-11-19 17:47:50.648753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:48.877 [2024-11-19 17:47:50.648872] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.877 [2024-11-19 17:47:50.658806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.877 Malloc0 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.877 17:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.877 [2024-11-19 17:47:50.731133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3658705 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3658707 00:29:48.877 17:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:48.877 { 00:29:48.877 "params": { 00:29:48.877 "name": "Nvme$subsystem", 00:29:48.877 "trtype": "$TEST_TRANSPORT", 00:29:48.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.877 "adrfam": "ipv4", 00:29:48.877 "trsvcid": "$NVMF_PORT", 00:29:48.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.877 "hdgst": ${hdgst:-false}, 00:29:48.877 "ddgst": ${ddgst:-false} 00:29:48.877 }, 00:29:48.877 "method": "bdev_nvme_attach_controller" 00:29:48.877 } 00:29:48.877 EOF 00:29:48.877 )") 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3658709 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:48.877 17:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:48.877 { 00:29:48.877 "params": { 00:29:48.877 "name": "Nvme$subsystem", 00:29:48.877 "trtype": "$TEST_TRANSPORT", 00:29:48.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.877 "adrfam": "ipv4", 00:29:48.877 "trsvcid": "$NVMF_PORT", 00:29:48.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.877 "hdgst": ${hdgst:-false}, 00:29:48.877 "ddgst": ${ddgst:-false} 00:29:48.877 }, 00:29:48.877 "method": "bdev_nvme_attach_controller" 00:29:48.877 } 00:29:48.877 EOF 00:29:48.877 )") 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3658712 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:48.877 { 00:29:48.877 "params": { 00:29:48.877 "name": 
"Nvme$subsystem", 00:29:48.877 "trtype": "$TEST_TRANSPORT", 00:29:48.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.877 "adrfam": "ipv4", 00:29:48.877 "trsvcid": "$NVMF_PORT", 00:29:48.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.877 "hdgst": ${hdgst:-false}, 00:29:48.877 "ddgst": ${ddgst:-false} 00:29:48.877 }, 00:29:48.877 "method": "bdev_nvme_attach_controller" 00:29:48.877 } 00:29:48.877 EOF 00:29:48.877 )") 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:48.877 { 00:29:48.877 "params": { 00:29:48.877 "name": "Nvme$subsystem", 00:29:48.877 "trtype": "$TEST_TRANSPORT", 00:29:48.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.877 "adrfam": "ipv4", 00:29:48.877 "trsvcid": "$NVMF_PORT", 00:29:48.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.877 "hdgst": ${hdgst:-false}, 00:29:48.877 "ddgst": ${ddgst:-false} 00:29:48.877 }, 00:29:48.877 "method": 
"bdev_nvme_attach_controller" 00:29:48.877 } 00:29:48.877 EOF 00:29:48.877 )") 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3658705 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:48.877 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:48.877 "params": { 00:29:48.877 "name": "Nvme1", 00:29:48.877 "trtype": "tcp", 00:29:48.877 "traddr": "10.0.0.2", 00:29:48.877 "adrfam": "ipv4", 00:29:48.877 "trsvcid": "4420", 00:29:48.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.878 "hdgst": false, 00:29:48.878 "ddgst": false 00:29:48.878 }, 00:29:48.878 "method": "bdev_nvme_attach_controller" 00:29:48.878 }' 00:29:48.878 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:48.878 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:48.878 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:48.878 "params": { 00:29:48.878 "name": "Nvme1", 00:29:48.878 "trtype": "tcp", 00:29:48.878 "traddr": "10.0.0.2", 00:29:48.878 "adrfam": "ipv4", 00:29:48.878 "trsvcid": "4420", 00:29:48.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.878 "hdgst": false, 00:29:48.878 "ddgst": false 00:29:48.878 }, 00:29:48.878 "method": "bdev_nvme_attach_controller" 00:29:48.878 }' 00:29:48.878 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:48.878 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:48.878 "params": { 00:29:48.878 "name": "Nvme1", 00:29:48.878 "trtype": "tcp", 00:29:48.878 "traddr": "10.0.0.2", 00:29:48.878 "adrfam": "ipv4", 00:29:48.878 "trsvcid": "4420", 00:29:48.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.878 "hdgst": false, 00:29:48.878 "ddgst": false 00:29:48.878 }, 00:29:48.878 "method": "bdev_nvme_attach_controller" 00:29:48.878 }' 00:29:48.878 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:48.878 17:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:48.878 "params": { 00:29:48.878 "name": "Nvme1", 00:29:48.878 "trtype": "tcp", 00:29:48.878 "traddr": "10.0.0.2", 00:29:48.878 "adrfam": "ipv4", 00:29:48.878 "trsvcid": "4420", 00:29:48.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.878 "hdgst": false, 00:29:48.878 "ddgst": false 00:29:48.878 }, 00:29:48.878 "method": "bdev_nvme_attach_controller" 
00:29:48.878 }' 00:29:48.878 [2024-11-19 17:47:50.783338] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:29:48.878 [2024-11-19 17:47:50.783392] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:48.878 [2024-11-19 17:47:50.785072] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:29:48.878 [2024-11-19 17:47:50.785116] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:48.878 [2024-11-19 17:47:50.788055] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:29:48.878 [2024-11-19 17:47:50.788056] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:29:48.878 [2024-11-19 17:47:50.788108] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:48.878 [2024-11-19 17:47:50.788108] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:48.878 [2024-11-19 17:47:50.980395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.878 [2024-11-19 17:47:51.023828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:48.878 [2024-11-19 17:47:51.074519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.137 [2024-11-19 17:47:51.127049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:49.137 [2024-11-19 17:47:51.134093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.137 [2024-11-19 17:47:51.177103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:49.137 [2024-11-19 17:47:51.185744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.137 [2024-11-19 17:47:51.228436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:49.137 Running I/O for 1 seconds... 00:29:49.395 Running I/O for 1 seconds... 00:29:49.395 Running I/O for 1 seconds... 00:29:49.395 Running I/O for 1 seconds... 
00:29:50.332 15816.00 IOPS, 61.78 MiB/s 00:29:50.332 Latency(us) 00:29:50.332 [2024-11-19T16:47:52.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.332 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:50.332 Nvme1n1 : 1.01 15882.46 62.04 0.00 0.00 8038.23 3433.52 9687.93 00:29:50.332 [2024-11-19T16:47:52.555Z] =================================================================================================================== 00:29:50.332 [2024-11-19T16:47:52.555Z] Total : 15882.46 62.04 0.00 0.00 8038.23 3433.52 9687.93 00:29:50.332 6563.00 IOPS, 25.64 MiB/s 00:29:50.332 Latency(us) 00:29:50.332 [2024-11-19T16:47:52.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.332 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:50.332 Nvme1n1 : 1.01 6614.15 25.84 0.00 0.00 19188.99 1752.38 28721.86 00:29:50.332 [2024-11-19T16:47:52.555Z] =================================================================================================================== 00:29:50.332 [2024-11-19T16:47:52.555Z] Total : 6614.15 25.84 0.00 0.00 19188.99 1752.38 28721.86 00:29:50.332 245696.00 IOPS, 959.75 MiB/s 00:29:50.332 Latency(us) 00:29:50.332 [2024-11-19T16:47:52.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.332 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:50.332 Nvme1n1 : 1.00 245314.69 958.26 0.00 0.00 518.98 229.73 1538.67 00:29:50.332 [2024-11-19T16:47:52.555Z] =================================================================================================================== 00:29:50.332 [2024-11-19T16:47:52.555Z] Total : 245314.69 958.26 0.00 0.00 518.98 229.73 1538.67 00:29:50.332 6987.00 IOPS, 27.29 MiB/s 00:29:50.332 Latency(us) 00:29:50.332 [2024-11-19T16:47:52.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.332 Job: Nvme1n1 (Core Mask 0x20, 
workload: read, depth: 128, IO size: 4096) 00:29:50.332 Nvme1n1 : 1.01 7084.47 27.67 0.00 0.00 18020.21 3875.17 37611.97 00:29:50.332 [2024-11-19T16:47:52.555Z] =================================================================================================================== 00:29:50.332 [2024-11-19T16:47:52.555Z] Total : 7084.47 27.67 0.00 0.00 18020.21 3875.17 37611.97 00:29:50.332 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3658707 00:29:50.602 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3658709 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3658712 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.603 17:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.603 rmmod nvme_tcp 00:29:50.603 rmmod nvme_fabrics 00:29:50.603 rmmod nvme_keyring 00:29:50.603 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3658662 ']' 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3658662 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3658662 ']' 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3658662 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3658662 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3658662' 00:29:50.604 killing process with pid 3658662 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3658662 00:29:50.604 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3658662 00:29:50.868 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.868 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.868 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.868 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:50.868 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:50.869 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.869 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.869 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.869 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.869 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.869 17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.869 
17:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.775 17:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.035 00:29:53.035 real 0m10.724s 00:29:53.035 user 0m15.307s 00:29:53.035 sys 0m6.394s 00:29:53.035 17:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.035 17:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.035 ************************************ 00:29:53.035 END TEST nvmf_bdev_io_wait 00:29:53.035 ************************************ 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:53.035 ************************************ 00:29:53.035 START TEST nvmf_queue_depth 00:29:53.035 ************************************ 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:53.035 * Looking for test storage... 
00:29:53.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.035 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:53.294 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.295 --rc genhtml_branch_coverage=1 00:29:53.295 --rc genhtml_function_coverage=1 00:29:53.295 --rc genhtml_legend=1 00:29:53.295 --rc geninfo_all_blocks=1 00:29:53.295 --rc geninfo_unexecuted_blocks=1 00:29:53.295 00:29:53.295 ' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.295 --rc genhtml_branch_coverage=1 00:29:53.295 --rc genhtml_function_coverage=1 00:29:53.295 --rc genhtml_legend=1 00:29:53.295 --rc geninfo_all_blocks=1 00:29:53.295 --rc geninfo_unexecuted_blocks=1 00:29:53.295 00:29:53.295 ' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.295 --rc genhtml_branch_coverage=1 00:29:53.295 --rc genhtml_function_coverage=1 00:29:53.295 --rc genhtml_legend=1 00:29:53.295 --rc geninfo_all_blocks=1 00:29:53.295 --rc geninfo_unexecuted_blocks=1 00:29:53.295 00:29:53.295 ' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.295 --rc genhtml_branch_coverage=1 00:29:53.295 --rc genhtml_function_coverage=1 00:29:53.295 --rc genhtml_legend=1 00:29:53.295 --rc 
geninfo_all_blocks=1 00:29:53.295 --rc geninfo_unexecuted_blocks=1 00:29:53.295 00:29:53.295 ' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.295 17:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.295 17:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:53.295 17:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.295 17:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.866 
17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.866 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:59.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.867 17:48:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:59.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:59.867 Found net devices under 0000:86:00.0: cvl_0_0 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:59.867 Found net devices under 0000:86:00.1: cvl_0_1 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.867 17:48:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.867 17:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:59.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:29:59.867 00:29:59.867 --- 10.0.0.2 ping statistics --- 00:29:59.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.867 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:29:59.867 00:29:59.867 --- 10.0.0.1 ping statistics --- 00:29:59.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.867 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.867 17:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3662693 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3662693 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3662693 ']' 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.867 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 [2024-11-19 17:48:01.241312] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:59.868 [2024-11-19 17:48:01.242272] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:29:59.868 [2024-11-19 17:48:01.242308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.868 [2024-11-19 17:48:01.325444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.868 [2024-11-19 17:48:01.366614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.868 [2024-11-19 17:48:01.366652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.868 [2024-11-19 17:48:01.366660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.868 [2024-11-19 17:48:01.366666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.868 [2024-11-19 17:48:01.366672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.868 [2024-11-19 17:48:01.367251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.868 [2024-11-19 17:48:01.434455] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:59.868 [2024-11-19 17:48:01.434678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 [2024-11-19 17:48:01.499922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 Malloc0 00:29:59.868 17:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 [2024-11-19 17:48:01.575903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.868 
17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3662814 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3662814 /var/tmp/bdevperf.sock 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3662814 ']' 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:59.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 [2024-11-19 17:48:01.626110] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:29:59.868 [2024-11-19 17:48:01.626161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662814 ] 00:29:59.868 [2024-11-19 17:48:01.699757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.868 [2024-11-19 17:48:01.740979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.868 NVMe0n1 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.868 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:59.868 Running I/O for 10 seconds... 
00:30:02.186 11270.00 IOPS, 44.02 MiB/s [2024-11-19T16:48:05.348Z] 11762.50 IOPS, 45.95 MiB/s [2024-11-19T16:48:06.284Z] 11934.67 IOPS, 46.62 MiB/s [2024-11-19T16:48:07.223Z] 12020.00 IOPS, 46.95 MiB/s [2024-11-19T16:48:08.157Z] 12062.00 IOPS, 47.12 MiB/s [2024-11-19T16:48:09.095Z] 12101.83 IOPS, 47.27 MiB/s [2024-11-19T16:48:10.487Z] 12126.86 IOPS, 47.37 MiB/s [2024-11-19T16:48:11.423Z] 12150.00 IOPS, 47.46 MiB/s [2024-11-19T16:48:12.360Z] 12169.44 IOPS, 47.54 MiB/s [2024-11-19T16:48:12.360Z] 12184.50 IOPS, 47.60 MiB/s 00:30:10.137 Latency(us) 00:30:10.137 [2024-11-19T16:48:12.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.137 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:10.137 Verification LBA range: start 0x0 length 0x4000 00:30:10.137 NVMe0n1 : 10.05 12221.34 47.74 0.00 0.00 83512.23 11910.46 55164.22 00:30:10.137 [2024-11-19T16:48:12.360Z] =================================================================================================================== 00:30:10.137 [2024-11-19T16:48:12.360Z] Total : 12221.34 47.74 0.00 0.00 83512.23 11910.46 55164.22 00:30:10.137 { 00:30:10.137 "results": [ 00:30:10.137 { 00:30:10.137 "job": "NVMe0n1", 00:30:10.137 "core_mask": "0x1", 00:30:10.137 "workload": "verify", 00:30:10.137 "status": "finished", 00:30:10.137 "verify_range": { 00:30:10.137 "start": 0, 00:30:10.138 "length": 16384 00:30:10.138 }, 00:30:10.138 "queue_depth": 1024, 00:30:10.138 "io_size": 4096, 00:30:10.138 "runtime": 10.052501, 00:30:10.138 "iops": 12221.33675987697, 00:30:10.138 "mibps": 47.73959671826941, 00:30:10.138 "io_failed": 0, 00:30:10.138 "io_timeout": 0, 00:30:10.138 "avg_latency_us": 83512.23281015972, 00:30:10.138 "min_latency_us": 11910.455652173912, 00:30:10.138 "max_latency_us": 55164.215652173916 00:30:10.138 } 00:30:10.138 ], 00:30:10.138 "core_count": 1 00:30:10.138 } 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3662814 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3662814 ']' 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3662814 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3662814 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3662814' 00:30:10.138 killing process with pid 3662814 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3662814 00:30:10.138 Received shutdown signal, test time was about 10.000000 seconds 00:30:10.138 00:30:10.138 Latency(us) 00:30:10.138 [2024-11-19T16:48:12.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.138 [2024-11-19T16:48:12.361Z] =================================================================================================================== 00:30:10.138 [2024-11-19T16:48:12.361Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3662814 00:30:10.138 17:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.138 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.138 rmmod nvme_tcp 00:30:10.397 rmmod nvme_fabrics 00:30:10.397 rmmod nvme_keyring 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3662693 ']' 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3662693 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3662693 ']' 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3662693 00:30:10.397 17:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3662693 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3662693' 00:30:10.397 killing process with pid 3662693 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3662693 00:30:10.397 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3662693 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.657 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.563 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.563 00:30:12.563 real 0m19.634s 00:30:12.563 user 0m22.607s 00:30:12.563 sys 0m6.308s 00:30:12.564 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.564 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.564 ************************************ 00:30:12.564 END TEST nvmf_queue_depth 00:30:12.564 ************************************ 00:30:12.564 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:12.564 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:12.564 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.564 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:12.564 ************************************ 00:30:12.564 START 
TEST nvmf_target_multipath 00:30:12.564 ************************************ 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:12.823 * Looking for test storage... 00:30:12.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.823 17:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.823 --rc genhtml_branch_coverage=1 00:30:12.823 --rc genhtml_function_coverage=1 00:30:12.823 --rc genhtml_legend=1 00:30:12.823 --rc geninfo_all_blocks=1 00:30:12.823 --rc geninfo_unexecuted_blocks=1 00:30:12.823 00:30:12.823 ' 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.823 --rc genhtml_branch_coverage=1 00:30:12.823 --rc genhtml_function_coverage=1 00:30:12.823 --rc genhtml_legend=1 00:30:12.823 --rc geninfo_all_blocks=1 00:30:12.823 --rc geninfo_unexecuted_blocks=1 00:30:12.823 00:30:12.823 ' 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.823 --rc genhtml_branch_coverage=1 00:30:12.823 --rc genhtml_function_coverage=1 00:30:12.823 --rc genhtml_legend=1 00:30:12.823 --rc geninfo_all_blocks=1 00:30:12.823 --rc geninfo_unexecuted_blocks=1 00:30:12.823 00:30:12.823 ' 00:30:12.823 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.824 --rc genhtml_branch_coverage=1 00:30:12.824 --rc genhtml_function_coverage=1 00:30:12.824 --rc genhtml_legend=1 00:30:12.824 --rc geninfo_all_blocks=1 00:30:12.824 --rc geninfo_unexecuted_blocks=1 00:30:12.824 00:30:12.824 ' 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.824 17:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.824 17:48:14 
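The build_nvmf_app_args calls traced above (nvmf/common.sh@25-39) assemble the target's command line conditionally. The following is a simplified illustrative re-creation, not the actual common.sh source: the binary path and variable defaults are assumptions, but the appended flags mirror what the trace shows.

```shell
# Illustrative sketch of the argument assembly seen in the trace.
# The binary path is a placeholder; flag order follows the xtrace output.
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
TEST_INTERRUPT_MODE=1   # this run passes --interrupt-mode to every suite

build_nvmf_app_args() {
    # common.sh@29: shared-memory ID plus a verbose error-log mask
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    # common.sh@33-34: only appended when the suite runs in interrupt mode
    if [ "$TEST_INTERRUPT_MODE" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)
    fi
}

build_nvmf_app_args
echo "${NVMF_APP[@]}"
```

In the trace the `'[' 1 -eq 1 ']'` check at common.sh@33 is what gates the `--interrupt-mode` append.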
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:12.824 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:19.395 17:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:19.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:19.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:19.395 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:19.396 Found net devices under 0000:86:00.0: cvl_0_0 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.396 17:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:19.396 Found net devices under 0000:86:00.1: cvl_0_1 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:19.396 17:48:20 
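The device scan above matches each PCI function's vendor:device pair against known NIC families (e810/x722 for Intel 0x8086, mlx for Mellanox 0x15b3) before looking up its net devices. A minimal sketch of that classification, using only the IDs visible in this trace (`classify_nic` is an illustrative helper, not a function from nvmf/common.sh):

```shell
# Classify a NIC by PCI vendor and device ID, as the trace does when it
# prints "Found 0000:86:00.0 (0x8086 - 0x159b)" and treats it as e810.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:*)                    echo mlx  ;;   # Mellanox family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the two ports found on this host
```

Both ports on this host (0000:86:00.0 and 0000:86:00.1) report 0x8086:0x159b, which is why the harness takes the e810 branch and finds the `cvl_0_0`/`cvl_0_1` net devices.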
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:19.396 17:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:19.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:19.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:30:19.396 00:30:19.396 --- 10.0.0.2 ping statistics --- 00:30:19.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.396 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:19.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:19.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:30:19.396 00:30:19.396 --- 10.0.0.1 ping statistics --- 00:30:19.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.396 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:19.396 only one NIC for nvmf test 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:19.396 17:48:20 
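The nvmf_tcp_init sequence traced above moves the target NIC into its own network namespace so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) exchange real TCP traffic on one host, then verifies both directions with ping. A dry-run sketch of that sequence follows; `run` only prints each command here (the real commands require root), and the interface/namespace names are taken from this trace.

```shell
# Dry-run of the namespace bring-up seen in nvmf/common.sh@271-284.
# Drop the echo inside run() to execute for real (needs root).
run() { echo "$*"; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                     # target NIC into NS
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"              # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
```

After this, common.sh@293 prepends `ip netns exec cvl_0_0_ns_spdk` to NVMF_APP so the target process starts inside the namespace.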
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:19.396 rmmod nvme_tcp 00:30:19.396 rmmod nvme_fabrics 00:30:19.396 rmmod nvme_keyring 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:19.396 17:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.396 17:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.305 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.305 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:21.305 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:21.305 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:21.305 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
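The `iptr` cleanup above works because every rule the harness adds (via the `ipts` wrapper at common.sh@790) carries an `SPDK_NVMF:` comment; teardown then pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, dropping only the tagged rules. A simulation of that filter on a small hand-written ruleset (the ruleset text is invented for illustration; live use replaces the variable with `iptables-save` output and feeds `kept` to `iptables-restore`):

```shell
# Simulated ruleset: one unrelated rule, one SPDK-tagged rule, one more
# unrelated rule. The filter must drop exactly the tagged line.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"
-A INPUT -j DROP'

# Same filter the trace runs between iptables-save and iptables-restore.
kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging rules with a comment at insertion time is what makes this cleanup safe: rules installed by anything else on the host survive the restore untouched.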
00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.306 
17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.306 00:30:21.306 real 0m8.303s 00:30:21.306 user 0m1.846s 00:30:21.306 sys 0m4.481s 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:21.306 ************************************ 00:30:21.306 END TEST nvmf_target_multipath 00:30:21.306 ************************************ 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:21.306 ************************************ 00:30:21.306 START TEST nvmf_zcopy 00:30:21.306 ************************************ 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:21.306 * Looking for test storage... 
00:30:21.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:21.306 17:48:23 
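The `lt 1.15 2` check traced above (used here to gate lcov options) splits both version strings on `.`, `-`, or `:` via `IFS=.-:` and compares them component-wise, with missing components treated as 0. A condensed illustrative version of that logic (`version_lt` is a simplified stand-in for the real cmp_versions in scripts/common.sh):

```shell
# Return 0 (true) when $1 < $2 under dotted-version ordering,
# as in the trace where 1.15 < 2 makes the check return 0.
version_lt() {
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk the longer of the two component lists; absent parts count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Splitting before comparing is what makes `1.15 < 2` come out true; a plain string comparison would get it wrong ("1.15" sorts before "2" lexically only by accident of the leading digit).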
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.306 --rc genhtml_branch_coverage=1 00:30:21.306 --rc genhtml_function_coverage=1 00:30:21.306 --rc genhtml_legend=1 00:30:21.306 --rc geninfo_all_blocks=1 00:30:21.306 --rc geninfo_unexecuted_blocks=1 00:30:21.306 00:30:21.306 ' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.306 --rc genhtml_branch_coverage=1 00:30:21.306 --rc genhtml_function_coverage=1 00:30:21.306 --rc genhtml_legend=1 00:30:21.306 --rc geninfo_all_blocks=1 00:30:21.306 --rc geninfo_unexecuted_blocks=1 00:30:21.306 00:30:21.306 ' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.306 --rc genhtml_branch_coverage=1 00:30:21.306 --rc genhtml_function_coverage=1 00:30:21.306 --rc genhtml_legend=1 00:30:21.306 --rc geninfo_all_blocks=1 00:30:21.306 --rc geninfo_unexecuted_blocks=1 00:30:21.306 00:30:21.306 ' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.306 --rc genhtml_branch_coverage=1 00:30:21.306 --rc genhtml_function_coverage=1 00:30:21.306 --rc genhtml_legend=1 00:30:21.306 --rc geninfo_all_blocks=1 00:30:21.306 --rc geninfo_unexecuted_blocks=1 00:30:21.306 00:30:21.306 ' 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:21.306 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.307 17:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.307 17:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.307 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:27.881 
17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.881 17:48:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:27.881 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:27.881 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:27.881 Found net devices under 0000:86:00.0: cvl_0_0 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.881 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:27.882 Found net devices under 0000:86:00.1: cvl_0_1 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
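The device enumeration above maps each supported PCI function (the `e810` entries found at `0000:86:00.0/.1`) to its kernel net interface via the sysfs glob `/sys/bus/pci/devices/$pci/net/*`, yielding `cvl_0_0` and `cvl_0_1`. A sketch of that lookup — the optional base-directory parameter is added here purely for testability and is not part of SPDK's script:

```shell
# List the kernel net interfaces bound to a PCI function, mirroring the
# pci_net_devs glob in the trace above. Second argument (sysfs base dir)
# is an illustrative addition so the function can be exercised on a fake tree.
pci_to_netdevs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices} dev
    for dev in "$base/$pci/net/"*; do
        [ -e "$dev" ] || continue    # glob did not match: no net driver bound
        echo "${dev##*/}"            # strip the sysfs path, keep the ifname
    done
}
```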
00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.882 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.882 17:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:27.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:30:27.882 00:30:27.882 --- 10.0.0.2 ping statistics --- 00:30:27.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.882 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:27.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:30:27.882 00:30:27.882 --- 10.0.0.1 ping statistics --- 00:30:27.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.882 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
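The `nvmf_tcp_init` phase above splits the two ports across namespaces: the target NIC moves into `cvl_0_0_ns_spdk`, each side gets a 10.0.0.0/24 address, links come up, an iptables rule opens TCP/4420, and a ping in each direction confirms reachability. A dry-run sketch of that plumbing — the `run` echo-wrapper is illustrative (the real commands require root), interface names and addresses are taken from the log:

```shell
# Dry-run sketch of the namespace plumbing in the trace above.
# 'run' only echoes each command; substitute e.g. 'sudo "$@"' to apply for real.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0            # NIC handed to the target namespace
INITIATOR_IF=cvl_0_1         # NIC left in the default namespace
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```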
nvmfpid=3671857 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3671857 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3671857 ']' 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.882 [2024-11-19 17:48:29.275610] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:27.882 [2024-11-19 17:48:29.276522] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:30:27.882 [2024-11-19 17:48:29.276555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.882 [2024-11-19 17:48:29.356263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.882 [2024-11-19 17:48:29.395000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.882 [2024-11-19 17:48:29.395035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.882 [2024-11-19 17:48:29.395042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.882 [2024-11-19 17:48:29.395048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.882 [2024-11-19 17:48:29.395054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.882 [2024-11-19 17:48:29.395598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.882 [2024-11-19 17:48:29.462250] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:27.882 [2024-11-19 17:48:29.462466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.882 [2024-11-19 17:48:29.536341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.882 
17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.882 [2024-11-19 17:48:29.564584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.882 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.883 malloc0 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:27.883 { 00:30:27.883 "params": { 00:30:27.883 "name": "Nvme$subsystem", 00:30:27.883 "trtype": "$TEST_TRANSPORT", 00:30:27.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.883 "adrfam": "ipv4", 00:30:27.883 "trsvcid": "$NVMF_PORT", 00:30:27.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.883 "hdgst": ${hdgst:-false}, 00:30:27.883 "ddgst": ${ddgst:-false} 00:30:27.883 }, 00:30:27.883 "method": "bdev_nvme_attach_controller" 00:30:27.883 } 00:30:27.883 EOF 00:30:27.883 )") 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:27.883 17:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:27.883 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:27.883 "params": { 00:30:27.883 "name": "Nvme1", 00:30:27.883 "trtype": "tcp", 00:30:27.883 "traddr": "10.0.0.2", 00:30:27.883 "adrfam": "ipv4", 00:30:27.883 "trsvcid": "4420", 00:30:27.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:27.883 "hdgst": false, 00:30:27.883 "ddgst": false 00:30:27.883 }, 00:30:27.883 "method": "bdev_nvme_attach_controller" 00:30:27.883 }' 00:30:27.883 [2024-11-19 17:48:29.656477] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:30:27.883 [2024-11-19 17:48:29.656523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671878 ] 00:30:27.883 [2024-11-19 17:48:29.733023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.883 [2024-11-19 17:48:29.774877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.883 Running I/O for 10 seconds... 
00:30:30.201 8104.00 IOPS, 63.31 MiB/s [2024-11-19T16:48:33.359Z] 8196.00 IOPS, 64.03 MiB/s [2024-11-19T16:48:34.297Z] 8268.00 IOPS, 64.59 MiB/s [2024-11-19T16:48:35.234Z] 8273.75 IOPS, 64.64 MiB/s [2024-11-19T16:48:36.171Z] 8290.00 IOPS, 64.77 MiB/s [2024-11-19T16:48:37.107Z] 8307.67 IOPS, 64.90 MiB/s [2024-11-19T16:48:38.045Z] 8317.71 IOPS, 64.98 MiB/s [2024-11-19T16:48:39.422Z] 8323.25 IOPS, 65.03 MiB/s [2024-11-19T16:48:40.358Z] 8326.11 IOPS, 65.05 MiB/s [2024-11-19T16:48:40.359Z] 8329.20 IOPS, 65.07 MiB/s
00:30:38.136 Latency(us)
00:30:38.136 [2024-11-19T16:48:40.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:38.136 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:30:38.136 Verification LBA range: start 0x0 length 0x1000
00:30:38.136 Nvme1n1 : 10.01 8333.80 65.11 0.00 0.00 15315.77 2521.71 23592.96
00:30:38.136 [2024-11-19T16:48:40.359Z] ===================================================================================================================
00:30:38.136 [2024-11-19T16:48:40.359Z] Total : 8333.80 65.11 0.00 0.00 15315.77 2521.71 23592.96
00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3673495
00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:30:38.136 17:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:38.136 { 00:30:38.136 "params": { 00:30:38.136 "name": "Nvme$subsystem", 00:30:38.136 "trtype": "$TEST_TRANSPORT", 00:30:38.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.136 "adrfam": "ipv4", 00:30:38.136 "trsvcid": "$NVMF_PORT", 00:30:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.136 "hdgst": ${hdgst:-false}, 00:30:38.136 "ddgst": ${ddgst:-false} 00:30:38.136 }, 00:30:38.136 "method": "bdev_nvme_attach_controller" 00:30:38.136 } 00:30:38.136 EOF 00:30:38.136 )") 00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:38.136 [2024-11-19 17:48:40.207943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.207985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:38.136 [2024-11-19 17:48:40.215900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.215913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:38.136 "params": { 00:30:38.136 "name": "Nvme1", 00:30:38.136 "trtype": "tcp", 00:30:38.136 "traddr": "10.0.0.2", 00:30:38.136 "adrfam": "ipv4", 00:30:38.136 "trsvcid": "4420", 00:30:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:38.136 "hdgst": false, 00:30:38.136 "ddgst": false 00:30:38.136 }, 00:30:38.136 "method": "bdev_nvme_attach_controller" 00:30:38.136 }' 00:30:38.136 [2024-11-19 17:48:40.223893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.223905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.231895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.231906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.239895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.239905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.247339] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:30:38.136 [2024-11-19 17:48:40.247382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673495 ] 00:30:38.136 [2024-11-19 17:48:40.251895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.251906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.263895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.263905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.275896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.275906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.287895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.287906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.299896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.299906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.311892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.311903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.321992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.136 [2024-11-19 17:48:40.323897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:38.136 [2024-11-19 17:48:40.323907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.335901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.335922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.136 [2024-11-19 17:48:40.347896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.136 [2024-11-19 17:48:40.347906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.359896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.359914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.364126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.395 [2024-11-19 17:48:40.371896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.371909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.383910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.383932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.395905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.395924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.407902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.407918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.419904] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.419921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.431907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.431924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.443896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.395 [2024-11-19 17:48:40.443906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.395 [2024-11-19 17:48:40.455913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.455936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.467934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.467955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.479909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.479927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.491900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.491915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.503898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.503909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.515895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.515905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.527898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.527911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.539897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.539910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.551895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.551906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.563894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.563904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.575901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.575919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.587895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.587906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.599896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 [2024-11-19 17:48:40.599907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.396 [2024-11-19 17:48:40.611894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.396 
[2024-11-19 17:48:40.611904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.623904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.623923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.635900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.635912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 Running I/O for 5 seconds... 00:30:38.654 [2024-11-19 17:48:40.651444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.651465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.665792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.665811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.681691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.681711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.697350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.697370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.712447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.712467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.727666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 
17:48:40.727686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.738763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.738782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.754183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.754202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.769612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.769631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.784775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.784795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.800359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.800377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.812369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.812388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.825434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.825453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.840917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.840942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.855821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.855842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.654 [2024-11-19 17:48:40.869662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.654 [2024-11-19 17:48:40.869680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.913 [2024-11-19 17:48:40.885044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.913 [2024-11-19 17:48:40.885063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.913 [2024-11-19 17:48:40.895841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.913 [2024-11-19 17:48:40.895860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.913 [2024-11-19 17:48:40.910392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.913 [2024-11-19 17:48:40.910411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.913 [2024-11-19 17:48:40.925857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:40.925876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:40.941271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:40.941290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:40.956396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:40.956414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 
[2024-11-19 17:48:40.972089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:40.972112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:40.983930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:40.983953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:40.997821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:40.997840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.013609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.013629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.029084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.029103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.044691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.044714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.060355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.060380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.072019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.072038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.085789] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.085808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.100639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.100657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.115574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.115598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.914 [2024-11-19 17:48:41.128013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:38.914 [2024-11-19 17:48:41.128032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.135018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.135038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.147487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.147506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.162499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.162519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.176862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.176884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.188378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.188397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.201957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.201975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.217278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.217297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.232234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.232253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.244749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.244768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.257716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.257735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.273466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.273485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.288380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.288400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.300280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 
[2024-11-19 17:48:41.300298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.313202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.313220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.324014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.324032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.338374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.338393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.353811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.353829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.368764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.368782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.176 [2024-11-19 17:48:41.383692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.176 [2024-11-19 17:48:41.383711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.436 [2024-11-19 17:48:41.397924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.436 [2024-11-19 17:48:41.397943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.436 [2024-11-19 17:48:41.413268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.436 [2024-11-19 17:48:41.413287] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.436 [2024-11-19 17:48:41.428484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.436 [2024-11-19 17:48:41.428503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously from [2024-11-19 17:48:41.439952] through [2024-11-19 17:48:43.593616]; repeated entries omitted ...]
00:30:39.436 16200.00 IOPS, 126.56 MiB/s [2024-11-19T16:48:41.659Z]
00:30:40.475 16192.50 IOPS, 126.50 MiB/s [2024-11-19T16:48:42.698Z]
00:30:41.514 [2024-11-19 17:48:43.609052] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.609072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 [2024-11-19 17:48:43.619845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.619864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 [2024-11-19 17:48:43.633894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.633913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 16224.33 IOPS, 126.75 MiB/s [2024-11-19T16:48:43.737Z] [2024-11-19 17:48:43.648811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.648829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 [2024-11-19 17:48:43.665274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.665293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 [2024-11-19 17:48:43.680928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.680952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 [2024-11-19 17:48:43.696280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.696299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 [2024-11-19 17:48:43.710979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.710998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.514 [2024-11-19 17:48:43.725705] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.514 [2024-11-19 17:48:43.725724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.774 [2024-11-19 17:48:43.740683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.774 [2024-11-19 17:48:43.740703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.774 [2024-11-19 17:48:43.751143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.774 [2024-11-19 17:48:43.751166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.774 [2024-11-19 17:48:43.765648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.774 [2024-11-19 17:48:43.765666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.774 [2024-11-19 17:48:43.781039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.774 [2024-11-19 17:48:43.781058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.796513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.796532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.806893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.806913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.822138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.822157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.837330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.837349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.852336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.852354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.863724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.863742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.878364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.878383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.893059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.893077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.905100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.905119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.919975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.919995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.930494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.930513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.946015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 
[2024-11-19 17:48:43.946035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.961276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.961294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.976625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.976644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.775 [2024-11-19 17:48:43.992010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.775 [2024-11-19 17:48:43.992028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.002840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.002860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.017527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.017550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.032532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.032550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.048145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.048164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.061075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.061093] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.071786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.071804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.086097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.086116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.101456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.101475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.116476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.116495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.131866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.131885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.145750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.145769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.161045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.161064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.172224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.172243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:42.035 [2024-11-19 17:48:44.185711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.185729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.200988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.201006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.212052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.212070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.225649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.225668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.035 [2024-11-19 17:48:44.240757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.035 [2024-11-19 17:48:44.240776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.294 [2024-11-19 17:48:44.256649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.294 [2024-11-19 17:48:44.256669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.271811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.271836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.284737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.284760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.297710] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.297730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.313039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.313058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.328170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.328189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.338984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.339003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.353552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.353572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.368576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.368595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.380643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.380661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.393891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.393909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.408904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.408922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.424285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.424303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.436276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.436295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.450291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.450310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.465516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.465535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.480471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.480489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.492278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.492296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.295 [2024-11-19 17:48:44.505790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.295 [2024-11-19 17:48:44.505809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.521120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 
[2024-11-19 17:48:44.521140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.536407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.536427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.547424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.547452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.562255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.562276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.577676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.577695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.593037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.593056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.608237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.608257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.625001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.625027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.640402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.640421] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 16223.00 IOPS, 126.74 MiB/s [2024-11-19T16:48:44.778Z] [2024-11-19 17:48:44.651975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.651995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.666136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.666155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.681492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.681510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.696636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.696655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.712437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.712455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.724742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.724761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.739540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.739559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.752142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.752164] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.555 [2024-11-19 17:48:44.766345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.555 [2024-11-19 17:48:44.766365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.814 [2024-11-19 17:48:44.781717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.814 [2024-11-19 17:48:44.781738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.814 [2024-11-19 17:48:44.797361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.814 [2024-11-19 17:48:44.797387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.814 [2024-11-19 17:48:44.812777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.812797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.822894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.822913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.838166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.838185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.853875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.853895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.869377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.869396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:42.815 [2024-11-19 17:48:44.879676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.879695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.894231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.894250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.909658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.909678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.924635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.924655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.936316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.936336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.949442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.949465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.965076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.965095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.980565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.980584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:44.992485] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:44.992503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:45.005685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:45.005705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:45.021398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:45.021419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.815 [2024-11-19 17:48:45.031657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.815 [2024-11-19 17:48:45.031675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.046455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.074 [2024-11-19 17:48:45.046475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.061520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.074 [2024-11-19 17:48:45.061539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.076750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.074 [2024-11-19 17:48:45.076769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.091911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.074 [2024-11-19 17:48:45.091930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.104507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:43.074 [2024-11-19 17:48:45.104526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.117702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.074 [2024-11-19 17:48:45.117723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.132779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.074 [2024-11-19 17:48:45.132798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.074 [2024-11-19 17:48:45.148281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.148300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.161443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.161461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.177046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.177065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.192250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.192268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.203787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.203805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.217723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 
[2024-11-19 17:48:45.217741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.232988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.233007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.248618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.248637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.263860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.263880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.276523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.276541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.075 [2024-11-19 17:48:45.289817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.075 [2024-11-19 17:48:45.289836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.334 [2024-11-19 17:48:45.305679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.334 [2024-11-19 17:48:45.305699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.334 [2024-11-19 17:48:45.320781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.334 [2024-11-19 17:48:45.320800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.334 [2024-11-19 17:48:45.335787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.334 [2024-11-19 17:48:45.335806] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.346875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.346898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.362389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.362408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.377151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.377169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.388014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.388033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.402180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.402198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.417685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.417703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.432755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.432773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.443448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.443468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.334 [2024-11-19 17:48:45.458425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.334 [2024-11-19 17:48:45.458445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.335 [2024-11-19 17:48:45.473296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.335 [2024-11-19 17:48:45.473315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.335 [2024-11-19 17:48:45.483891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.335 [2024-11-19 17:48:45.483909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.335 [2024-11-19 17:48:45.497784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.335 [2024-11-19 17:48:45.497802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.335 [2024-11-19 17:48:45.513021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.335 [2024-11-19 17:48:45.513039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.335 [2024-11-19 17:48:45.527793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.335 [2024-11-19 17:48:45.527811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.335 [2024-11-19 17:48:45.541713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.335 [2024-11-19 17:48:45.541733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.557073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.557093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.572306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.572324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.583585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.583603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.598010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.598029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.613286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.613309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.623727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.623746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.637778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.637797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 16216.40 IOPS, 126.69 MiB/s [2024-11-19T16:48:45.817Z] [2024-11-19 17:48:45.653152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.653171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594
00:30:43.594 Latency(us)
00:30:43.594 [2024-11-19T16:48:45.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:43.594 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:43.594 Nvme1n1 : 5.01 16219.51 126.71 0.00 0.00 7884.09 2094.30 14019.01
00:30:43.594 [2024-11-19T16:48:45.817Z] ===================================================================================================================
00:30:43.594 [2024-11-19T16:48:45.817Z] Total : 16219.51 126.71 0.00 0.00 7884.09 2094.30 14019.01
00:30:43.594 [2024-11-19 17:48:45.663903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.663922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.675901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.675916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.687919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.687939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.699909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.699928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.711909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.711927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.723904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.594 [2024-11-19 17:48:45.723921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.594 [2024-11-19 17:48:45.735902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.595 [2024-11-19 17:48:45.735919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.595 [2024-11-19 17:48:45.747904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.595 [2024-11-19 17:48:45.747922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.595 [2024-11-19 17:48:45.759902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.595 [2024-11-19 17:48:45.759918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.595 [2024-11-19 17:48:45.771898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.595 [2024-11-19 17:48:45.771909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.595 [2024-11-19 17:48:45.783924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.595 [2024-11-19 17:48:45.783940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.595 [2024-11-19 17:48:45.795907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.595 [2024-11-19 17:48:45.795924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.595 [2024-11-19 17:48:45.807912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.595 [2024-11-19 17:48:45.807928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.855 [2024-11-19 17:48:45.819896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:43.855 [2024-11-19 17:48:45.819907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:43.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3673495) - No such process
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3673495
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.855 delay0
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.855 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:30:43.855 [2024-11-19 17:48:45.968810] nvme_fabric.c:
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:30:51.993 Initializing NVMe Controllers
00:30:51.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:51.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:51.993 Initialization complete. Launching workers.
00:30:51.993 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5881
00:30:51.993 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6158, failed to submit 43
00:30:51.993 success 5995, unsuccessful 163, failed 0
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:51.993 rmmod nvme_tcp
00:30:51.993 rmmod nvme_fabrics
00:30:51.993 rmmod nvme_keyring
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3671857 ']'
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3671857
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3671857 ']'
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3671857
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671857
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671857'
00:30:51.993 killing process with pid 3671857
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3671857
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3671857
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:51.993 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:53.375
00:30:53.375 real 0m32.248s
00:30:53.375 user 0m41.932s
00:30:53.375 sys 0m12.658s
00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:53.375 ************************************
00:30:53.375 END TEST nvmf_zcopy
00:30:53.375 ************************************
00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:53.375 ************************************ 00:30:53.375 START TEST nvmf_nmic 00:30:53.375 ************************************ 00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:53.375 * Looking for test storage... 00:30:53.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:30:53.375 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:53.636 17:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:53.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.636 --rc genhtml_branch_coverage=1 00:30:53.636 --rc 
genhtml_function_coverage=1 00:30:53.636 --rc genhtml_legend=1 00:30:53.636 --rc geninfo_all_blocks=1 00:30:53.636 --rc geninfo_unexecuted_blocks=1 00:30:53.636 00:30:53.636 ' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:53.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.636 --rc genhtml_branch_coverage=1 00:30:53.636 --rc genhtml_function_coverage=1 00:30:53.636 --rc genhtml_legend=1 00:30:53.636 --rc geninfo_all_blocks=1 00:30:53.636 --rc geninfo_unexecuted_blocks=1 00:30:53.636 00:30:53.636 ' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:53.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.636 --rc genhtml_branch_coverage=1 00:30:53.636 --rc genhtml_function_coverage=1 00:30:53.636 --rc genhtml_legend=1 00:30:53.636 --rc geninfo_all_blocks=1 00:30:53.636 --rc geninfo_unexecuted_blocks=1 00:30:53.636 00:30:53.636 ' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:53.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.636 --rc genhtml_branch_coverage=1 00:30:53.636 --rc genhtml_function_coverage=1 00:30:53.636 --rc genhtml_legend=1 00:30:53.636 --rc geninfo_all_blocks=1 00:30:53.636 --rc geninfo_unexecuted_blocks=1 00:30:53.636 00:30:53.636 ' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.636 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.637 17:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.637 17:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.637 17:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:53.637 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:00.214 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:00.214 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.214 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:00.215 Found net devices under 0000:86:00.0: cvl_0_0 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:00.215 Found net devices under 0000:86:00.1: cvl_0_1 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.215 17:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:00.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:00.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:31:00.215 00:31:00.215 --- 10.0.0.2 ping statistics --- 00:31:00.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.215 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:00.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:31:00.215 00:31:00.215 --- 10.0.0.1 ping statistics --- 00:31:00.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.215 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:00.215 17:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3679063 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3679063 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3679063 ']' 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.215 [2024-11-19 17:49:01.614179] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:00.215 [2024-11-19 17:49:01.615119] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:31:00.215 [2024-11-19 17:49:01.615153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.215 [2024-11-19 17:49:01.680329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.215 [2024-11-19 17:49:01.725427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.215 [2024-11-19 17:49:01.725465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.215 [2024-11-19 17:49:01.725473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.215 [2024-11-19 17:49:01.725480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.215 [2024-11-19 17:49:01.725485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.215 [2024-11-19 17:49:01.727966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.215 [2024-11-19 17:49:01.728005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.215 [2024-11-19 17:49:01.728111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.215 [2024-11-19 17:49:01.728112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.215 [2024-11-19 17:49:01.795918] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:00.215 [2024-11-19 17:49:01.796613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:00.215 [2024-11-19 17:49:01.796905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:00.215 [2024-11-19 17:49:01.797223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:00.215 [2024-11-19 17:49:01.797326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.215 [2024-11-19 17:49:01.876826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.215 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 Malloc0 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 [2024-11-19 
17:49:01.961041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:00.216 test case1: single bdev can't be used in multiple subsystems 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 [2024-11-19 17:49:01.992538] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:00.216 [2024-11-19 17:49:01.992565] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:00.216 [2024-11-19 17:49:01.992573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.216 request: 00:31:00.216 { 00:31:00.216 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:00.216 "namespace": { 00:31:00.216 "bdev_name": "Malloc0", 00:31:00.216 "no_auto_visible": false 00:31:00.216 }, 00:31:00.216 "method": "nvmf_subsystem_add_ns", 00:31:00.216 "req_id": 1 00:31:00.216 } 00:31:00.216 Got JSON-RPC error response 00:31:00.216 response: 00:31:00.216 { 00:31:00.216 "code": -32602, 00:31:00.216 "message": "Invalid parameters" 00:31:00.216 } 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:00.216 Adding namespace failed - expected result. 
00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:00.216 test case2: host connect to nvmf target in multiple paths 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.216 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:00.216 [2024-11-19 17:49:02.004651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:00.216 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.216 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:00.216 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:00.476 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:00.476 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:00.476 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:00.476 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:00.476 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:02.384 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:02.384 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:02.384 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:02.384 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:02.384 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:02.384 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:02.384 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:02.384 [global] 00:31:02.384 thread=1 00:31:02.384 invalidate=1 00:31:02.384 rw=write 00:31:02.384 time_based=1 00:31:02.384 runtime=1 00:31:02.384 ioengine=libaio 00:31:02.384 direct=1 00:31:02.384 bs=4096 00:31:02.384 iodepth=1 00:31:02.384 norandommap=0 00:31:02.384 numjobs=1 00:31:02.384 00:31:02.384 verify_dump=1 00:31:02.384 verify_backlog=512 00:31:02.384 verify_state_save=0 00:31:02.384 do_verify=1 00:31:02.384 verify=crc32c-intel 00:31:02.384 [job0] 00:31:02.384 filename=/dev/nvme0n1 00:31:02.384 Could not set queue depth (nvme0n1) 00:31:02.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:02.643 fio-3.35 00:31:02.643 Starting 1 thread 00:31:04.023 00:31:04.023 job0: (groupid=0, jobs=1): err= 0: pid=3679690: Tue Nov 19 
17:49:05 2024 00:31:04.023 read: IOPS=349, BW=1399KiB/s (1433kB/s)(1412KiB/1009msec) 00:31:04.023 slat (nsec): min=7016, max=26277, avg=8936.09, stdev=3644.89 00:31:04.023 clat (usec): min=182, max=41111, avg=2406.10, stdev=9208.52 00:31:04.023 lat (usec): min=189, max=41133, avg=2415.03, stdev=9211.62 00:31:04.023 clat percentiles (usec): 00:31:04.023 | 1.00th=[ 186], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:31:04.023 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 00:31:04.023 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 265], 95.00th=[40633], 00:31:04.023 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:04.023 | 99.99th=[41157] 00:31:04.023 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:31:04.023 slat (usec): min=10, max=26622, avg=64.45, stdev=1176.01 00:31:04.023 clat (usec): min=133, max=306, avg=233.94, stdev=23.21 00:31:04.023 lat (usec): min=146, max=26888, avg=298.40, stdev=1177.67 00:31:04.023 clat percentiles (usec): 00:31:04.023 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 233], 20.00th=[ 237], 00:31:04.023 | 30.00th=[ 239], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:31:04.023 | 70.00th=[ 241], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 245], 00:31:04.023 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 306], 99.95th=[ 306], 00:31:04.023 | 99.99th=[ 306] 00:31:04.023 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:04.023 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:04.023 lat (usec) : 250=92.02%, 500=5.66% 00:31:04.023 lat (msec) : 10=0.12%, 50=2.20% 00:31:04.023 cpu : usr=0.89%, sys=1.19%, ctx=867, majf=0, minf=1 00:31:04.023 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:04.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.023 issued rwts: 
total=353,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.023 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:04.023 00:31:04.023 Run status group 0 (all jobs): 00:31:04.023 READ: bw=1399KiB/s (1433kB/s), 1399KiB/s-1399KiB/s (1433kB/s-1433kB/s), io=1412KiB (1446kB), run=1009-1009msec 00:31:04.023 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:31:04.023 00:31:04.023 Disk stats (read/write): 00:31:04.023 nvme0n1: ios=377/512, merge=0/0, ticks=1714/113, in_queue=1827, util=98.50% 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:04.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:04.023 17:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.023 rmmod nvme_tcp 00:31:04.023 rmmod nvme_fabrics 00:31:04.023 rmmod nvme_keyring 00:31:04.023 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3679063 ']' 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3679063 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3679063 ']' 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3679063 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3679063 
00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3679063' 00:31:04.283 killing process with pid 3679063 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3679063 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3679063 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.283 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.283 17:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.543 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.599 00:31:06.599 real 0m13.082s 00:31:06.599 user 0m24.068s 00:31:06.599 sys 0m5.997s 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.599 ************************************ 00:31:06.599 END TEST nvmf_nmic 00:31:06.599 ************************************ 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:06.599 ************************************ 00:31:06.599 START TEST nvmf_fio_target 00:31:06.599 ************************************ 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:06.599 * Looking for test storage... 
00:31:06.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.599 
17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:06.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.599 --rc genhtml_branch_coverage=1 00:31:06.599 --rc genhtml_function_coverage=1 00:31:06.599 --rc genhtml_legend=1 00:31:06.599 --rc geninfo_all_blocks=1 00:31:06.599 --rc geninfo_unexecuted_blocks=1 00:31:06.599 00:31:06.599 ' 00:31:06.599 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:06.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.599 --rc genhtml_branch_coverage=1 00:31:06.599 --rc genhtml_function_coverage=1 00:31:06.599 --rc genhtml_legend=1 00:31:06.599 --rc geninfo_all_blocks=1 00:31:06.599 --rc geninfo_unexecuted_blocks=1 00:31:06.599 00:31:06.599 ' 00:31:06.600 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:06.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.600 --rc genhtml_branch_coverage=1 00:31:06.600 --rc genhtml_function_coverage=1 00:31:06.600 --rc genhtml_legend=1 00:31:06.600 --rc geninfo_all_blocks=1 00:31:06.600 --rc geninfo_unexecuted_blocks=1 00:31:06.600 00:31:06.600 ' 00:31:06.600 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:06.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.600 --rc genhtml_branch_coverage=1 00:31:06.600 --rc genhtml_function_coverage=1 00:31:06.600 --rc genhtml_legend=1 00:31:06.600 --rc geninfo_all_blocks=1 
00:31:06.600 --rc geninfo_unexecuted_blocks=1 00:31:06.600 00:31:06.600 ' 00:31:06.600 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.600 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.879 
17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.879 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.880 17:49:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.880 
17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.880 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.880 17:49:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:13.453 17:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:13.453 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:13.453 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.453 
17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:13.453 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:13.453 Found net devices under 0000:86:00.1: cvl_0_1 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:13.453 17:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.453 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:13.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:31:13.454 00:31:13.454 --- 10.0.0.2 ping statistics --- 00:31:13.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.454 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:31:13.454 00:31:13.454 --- 10.0.0.1 ping statistics --- 00:31:13.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.454 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.454 17:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3683433 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3683433 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3683433 ']' 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.454 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.454 [2024-11-19 17:49:14.798632] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:13.454 [2024-11-19 17:49:14.799602] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:31:13.454 [2024-11-19 17:49:14.799635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.454 [2024-11-19 17:49:14.880573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:13.454 [2024-11-19 17:49:14.926174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.454 [2024-11-19 17:49:14.926208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.454 [2024-11-19 17:49:14.926216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.454 [2024-11-19 17:49:14.926222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.454 [2024-11-19 17:49:14.926228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.454 [2024-11-19 17:49:14.930966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.454 [2024-11-19 17:49:14.931021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.454 [2024-11-19 17:49:14.931022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.454 [2024-11-19 17:49:14.930991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.454 [2024-11-19 17:49:14.999450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:13.454 [2024-11-19 17:49:15.000115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:13.454 [2024-11-19 17:49:15.000178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:13.454 [2024-11-19 17:49:15.000649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:13.454 [2024-11-19 17:49:15.000734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:13.454 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.454 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:13.454 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:13.454 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:13.454 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.714 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:13.714 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:13.714 [2024-11-19 17:49:15.847802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.714 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:13.973 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:13.973 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:14.232 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:14.232 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.492 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:14.492 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.751 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:14.751 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:14.751 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:15.011 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:15.011 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:15.270 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:15.270 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:15.529 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:15.529 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:15.788 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:15.788 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:15.788 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.047 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:16.047 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:16.307 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.566 [2024-11-19 17:49:18.563752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.566 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:16.825 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:16.825 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:17.084 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:17.084 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:17.084 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:17.084 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:17.084 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:17.084 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:19.620 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:19.620 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:19.620 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:19.620 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:19.620 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:19.620 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:19.620 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:19.620 [global] 00:31:19.620 thread=1 00:31:19.620 invalidate=1 00:31:19.620 rw=write 00:31:19.620 time_based=1 00:31:19.620 runtime=1 00:31:19.620 ioengine=libaio 00:31:19.620 direct=1 00:31:19.620 bs=4096 00:31:19.620 iodepth=1 00:31:19.620 norandommap=0 00:31:19.620 numjobs=1 00:31:19.620 00:31:19.620 verify_dump=1 00:31:19.620 verify_backlog=512 00:31:19.620 verify_state_save=0 00:31:19.620 do_verify=1 00:31:19.620 verify=crc32c-intel 00:31:19.620 [job0] 00:31:19.620 filename=/dev/nvme0n1 00:31:19.620 [job1] 00:31:19.620 filename=/dev/nvme0n2 00:31:19.620 [job2] 00:31:19.620 filename=/dev/nvme0n3 00:31:19.620 [job3] 00:31:19.620 filename=/dev/nvme0n4 00:31:19.620 Could not set queue depth (nvme0n1) 00:31:19.620 Could not set queue depth (nvme0n2) 00:31:19.620 Could not set queue depth (nvme0n3) 00:31:19.620 Could not set queue depth (nvme0n4) 00:31:19.620 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:19.620 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:19.620 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:19.620 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:19.620 fio-3.35 00:31:19.620 Starting 4 threads 00:31:21.031 00:31:21.031 job0: (groupid=0, jobs=1): err= 0: pid=3684770: Tue Nov 19 17:49:22 2024 00:31:21.031 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:31:21.031 slat (usec): min=4, max=115, avg= 7.25, stdev= 4.37 00:31:21.031 clat (usec): min=107, max=41198, avg=766.08, stdev=4639.33 00:31:21.031 lat (usec): min=186, max=41206, 
avg=773.33, stdev=4640.26 00:31:21.031 clat percentiles (usec): 00:31:21.031 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:31:21.031 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:31:21.031 | 70.00th=[ 208], 80.00th=[ 245], 90.00th=[ 281], 95.00th=[ 441], 00:31:21.031 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:21.031 | 99.99th=[41157] 00:31:21.031 write: IOPS=1127, BW=4511KiB/s (4620kB/s)(4516KiB/1001msec); 0 zone resets 00:31:21.031 slat (usec): min=9, max=15020, avg=24.17, stdev=446.70 00:31:21.031 clat (usec): min=119, max=288, avg=156.23, stdev=24.59 00:31:21.031 lat (usec): min=132, max=15308, avg=180.40, stdev=451.30 00:31:21.031 clat percentiles (usec): 00:31:21.031 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:31:21.031 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 151], 60.00th=[ 163], 00:31:21.031 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:31:21.031 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 255], 99.95th=[ 289], 00:31:21.031 | 99.99th=[ 289] 00:31:21.031 bw ( KiB/s): min= 4087, max= 4087, per=30.47%, avg=4087.00, stdev= 0.00, samples=1 00:31:21.031 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:21.031 lat (usec) : 250=92.01%, 500=6.64%, 750=0.70% 00:31:21.031 lat (msec) : 50=0.65% 00:31:21.031 cpu : usr=0.60%, sys=2.40%, ctx=2159, majf=0, minf=1 00:31:21.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.031 issued rwts: total=1024,1129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.031 job1: (groupid=0, jobs=1): err= 0: pid=3684772: Tue Nov 19 17:49:22 2024 00:31:21.031 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:31:21.031 
slat (nsec): min=4835, max=26377, avg=8894.41, stdev=3533.27 00:31:21.031 clat (usec): min=169, max=41137, avg=1641.76, stdev=7515.08 00:31:21.031 lat (usec): min=176, max=41148, avg=1650.66, stdev=7516.58 00:31:21.031 clat percentiles (usec): 00:31:21.031 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:31:21.031 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:31:21.031 | 70.00th=[ 202], 80.00th=[ 217], 90.00th=[ 269], 95.00th=[ 424], 00:31:21.031 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:21.031 | 99.99th=[41157] 00:31:21.031 write: IOPS=811, BW=3245KiB/s (3323kB/s)(3248KiB/1001msec); 0 zone resets 00:31:21.031 slat (usec): min=9, max=190, avg=12.81, stdev= 7.41 00:31:21.031 clat (usec): min=2, max=313, avg=173.00, stdev=21.08 00:31:21.031 lat (usec): min=146, max=373, avg=185.82, stdev=21.08 00:31:21.031 clat percentiles (usec): 00:31:21.031 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:31:21.031 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:31:21.032 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 208], 00:31:21.032 | 99.00th=[ 233], 99.50th=[ 239], 99.90th=[ 314], 99.95th=[ 314], 00:31:21.032 | 99.99th=[ 314] 00:31:21.032 bw ( KiB/s): min= 4087, max= 4087, per=30.47%, avg=4087.00, stdev= 0.00, samples=1 00:31:21.032 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:21.032 lat (usec) : 4=0.08%, 250=95.39%, 500=2.87%, 750=0.23% 00:31:21.032 lat (msec) : 2=0.08%, 50=1.36% 00:31:21.032 cpu : usr=1.20%, sys=2.00%, ctx=1324, majf=0, minf=2 00:31:21.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.032 issued rwts: total=512,812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.032 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:31:21.032 job2: (groupid=0, jobs=1): err= 0: pid=3684777: Tue Nov 19 17:49:22 2024 00:31:21.032 read: IOPS=606, BW=2424KiB/s (2482kB/s)(2424KiB/1000msec) 00:31:21.032 slat (nsec): min=7197, max=40956, avg=8862.08, stdev=2927.43 00:31:21.032 clat (usec): min=204, max=41149, avg=1308.83, stdev=6543.93 00:31:21.032 lat (usec): min=212, max=41162, avg=1317.70, stdev=6545.44 00:31:21.032 clat percentiles (usec): 00:31:21.032 | 1.00th=[ 210], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:31:21.032 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:31:21.032 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 262], 00:31:21.032 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:21.032 | 99.99th=[41157] 00:31:21.032 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:31:21.032 slat (nsec): min=10325, max=70598, avg=12604.94, stdev=3453.38 00:31:21.032 clat (usec): min=131, max=379, avg=179.38, stdev=23.04 00:31:21.032 lat (usec): min=145, max=393, avg=191.98, stdev=24.31 00:31:21.032 clat percentiles (usec): 00:31:21.032 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 159], 00:31:21.032 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 186], 00:31:21.032 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 212], 00:31:21.032 | 99.00th=[ 225], 99.50th=[ 243], 99.90th=[ 359], 99.95th=[ 379], 00:31:21.032 | 99.99th=[ 379] 00:31:21.032 bw ( KiB/s): min= 4087, max= 4087, per=30.47%, avg=4087.00, stdev= 0.00, samples=1 00:31:21.032 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:21.032 lat (usec) : 250=94.05%, 500=4.97% 00:31:21.032 lat (msec) : 50=0.98% 00:31:21.032 cpu : usr=1.20%, sys=2.90%, ctx=1632, majf=0, minf=2 00:31:21.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.032 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.032 issued rwts: total=606,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.032 job3: (groupid=0, jobs=1): err= 0: pid=3684778: Tue Nov 19 17:49:22 2024 00:31:21.032 read: IOPS=326, BW=1308KiB/s (1339kB/s)(1356KiB/1037msec) 00:31:21.032 slat (nsec): min=6910, max=30724, avg=8821.62, stdev=4097.22 00:31:21.032 clat (usec): min=178, max=42022, avg=2725.75, stdev=9861.68 00:31:21.032 lat (usec): min=185, max=42045, avg=2734.57, stdev=9865.24 00:31:21.032 clat percentiles (usec): 00:31:21.032 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:31:21.032 | 30.00th=[ 188], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:31:21.032 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 249], 95.00th=[41157], 00:31:21.032 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:21.032 | 99.99th=[42206] 00:31:21.032 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:31:21.032 slat (usec): min=9, max=15061, avg=40.29, stdev=665.15 00:31:21.032 clat (usec): min=142, max=333, avg=168.96, stdev=13.35 00:31:21.032 lat (usec): min=156, max=15395, avg=209.25, stdev=672.54 00:31:21.032 clat percentiles (usec): 00:31:21.032 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:31:21.032 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:31:21.032 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:31:21.032 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 334], 99.95th=[ 334], 00:31:21.032 | 99.99th=[ 334] 00:31:21.032 bw ( KiB/s): min= 4087, max= 4087, per=30.47%, avg=4087.00, stdev= 0.00, samples=1 00:31:21.032 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:21.032 lat (usec) : 250=96.71%, 500=0.82% 00:31:21.032 lat (msec) : 50=2.47% 00:31:21.032 cpu : usr=0.29%, sys=0.87%, ctx=855, majf=0, minf=1 00:31:21.032 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.032 issued rwts: total=339,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.032 00:31:21.032 Run status group 0 (all jobs): 00:31:21.032 READ: bw=9570KiB/s (9800kB/s), 1308KiB/s-4092KiB/s (1339kB/s-4190kB/s), io=9924KiB (10.2MB), run=1000-1037msec 00:31:21.032 WRITE: bw=13.1MiB/s (13.7MB/s), 1975KiB/s-4511KiB/s (2022kB/s-4620kB/s), io=13.6MiB (14.2MB), run=1001-1037msec 00:31:21.032 00:31:21.032 Disk stats (read/write): 00:31:21.032 nvme0n1: ios=871/1024, merge=0/0, ticks=1484/157, in_queue=1641, util=87.36% 00:31:21.032 nvme0n2: ios=308/512, merge=0/0, ticks=758/88, in_queue=846, util=85.67% 00:31:21.032 nvme0n3: ios=416/512, merge=0/0, ticks=966/90, in_queue=1056, util=93.90% 00:31:21.032 nvme0n4: ios=393/512, merge=0/0, ticks=1161/83, in_queue=1244, util=100.00% 00:31:21.032 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:21.032 [global] 00:31:21.032 thread=1 00:31:21.032 invalidate=1 00:31:21.032 rw=randwrite 00:31:21.032 time_based=1 00:31:21.032 runtime=1 00:31:21.032 ioengine=libaio 00:31:21.032 direct=1 00:31:21.032 bs=4096 00:31:21.032 iodepth=1 00:31:21.032 norandommap=0 00:31:21.032 numjobs=1 00:31:21.032 00:31:21.032 verify_dump=1 00:31:21.032 verify_backlog=512 00:31:21.032 verify_state_save=0 00:31:21.032 do_verify=1 00:31:21.032 verify=crc32c-intel 00:31:21.032 [job0] 00:31:21.032 filename=/dev/nvme0n1 00:31:21.032 [job1] 00:31:21.032 filename=/dev/nvme0n2 00:31:21.032 [job2] 00:31:21.032 filename=/dev/nvme0n3 00:31:21.032 [job3] 00:31:21.032 filename=/dev/nvme0n4 00:31:21.032 Could not set queue 
depth (nvme0n1) 00:31:21.032 Could not set queue depth (nvme0n2) 00:31:21.032 Could not set queue depth (nvme0n3) 00:31:21.032 Could not set queue depth (nvme0n4) 00:31:21.293 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:21.293 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:21.293 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:21.293 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:21.293 fio-3.35 00:31:21.293 Starting 4 threads 00:31:22.665 00:31:22.665 job0: (groupid=0, jobs=1): err= 0: pid=3685148: Tue Nov 19 17:49:24 2024 00:31:22.665 read: IOPS=22, BW=91.3KiB/s (93.5kB/s)(92.0KiB/1008msec) 00:31:22.665 slat (nsec): min=10421, max=31093, avg=21529.78, stdev=5019.66 00:31:22.665 clat (usec): min=242, max=41982, avg=39275.89, stdev=8513.99 00:31:22.665 lat (usec): min=252, max=42011, avg=39297.42, stdev=8516.49 00:31:22.665 clat percentiles (usec): 00:31:22.665 | 1.00th=[ 243], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:22.665 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:22.665 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:22.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:22.665 | 99.99th=[42206] 00:31:22.665 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:31:22.665 slat (nsec): min=12188, max=50843, avg=15179.27, stdev=5000.02 00:31:22.665 clat (usec): min=132, max=323, avg=182.10, stdev=19.95 00:31:22.665 lat (usec): min=156, max=355, avg=197.28, stdev=20.82 00:31:22.665 clat percentiles (usec): 00:31:22.665 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:31:22.665 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:31:22.665 | 
70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 212], 00:31:22.665 | 99.00th=[ 255], 99.50th=[ 281], 99.90th=[ 322], 99.95th=[ 322], 00:31:22.665 | 99.99th=[ 322] 00:31:22.665 bw ( KiB/s): min= 4096, max= 4096, per=22.96%, avg=4096.00, stdev= 0.00, samples=1 00:31:22.665 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:22.665 lat (usec) : 250=94.58%, 500=1.31% 00:31:22.665 lat (msec) : 50=4.11% 00:31:22.665 cpu : usr=0.60%, sys=0.89%, ctx=536, majf=0, minf=1 00:31:22.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.665 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:22.665 job1: (groupid=0, jobs=1): err= 0: pid=3685149: Tue Nov 19 17:49:24 2024 00:31:22.665 read: IOPS=2354, BW=9419KiB/s (9645kB/s)(9428KiB/1001msec) 00:31:22.665 slat (nsec): min=6442, max=25611, avg=7280.45, stdev=885.08 00:31:22.665 clat (usec): min=179, max=374, avg=232.26, stdev=33.24 00:31:22.665 lat (usec): min=191, max=382, avg=239.54, stdev=33.24 00:31:22.665 clat percentiles (usec): 00:31:22.665 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196], 00:31:22.665 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 243], 60.00th=[ 247], 00:31:22.665 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 281], 00:31:22.665 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 310], 99.95th=[ 330], 00:31:22.665 | 99.99th=[ 375] 00:31:22.665 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:22.665 slat (nsec): min=3387, max=68880, avg=10570.24, stdev=1954.00 00:31:22.665 clat (usec): min=118, max=684, avg=155.46, stdev=32.56 00:31:22.665 lat (usec): min=127, max=689, avg=166.03, stdev=33.17 00:31:22.665 clat percentiles (usec): 00:31:22.665 | 
1.00th=[ 125], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 133], 00:31:22.665 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 153], 00:31:22.665 | 70.00th=[ 165], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 206], 00:31:22.665 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 404], 99.95th=[ 412], 00:31:22.665 | 99.99th=[ 685] 00:31:22.665 bw ( KiB/s): min=10488, max=10488, per=58.78%, avg=10488.00, stdev= 0.00, samples=1 00:31:22.665 iops : min= 2622, max= 2622, avg=2622.00, stdev= 0.00, samples=1 00:31:22.665 lat (usec) : 250=84.30%, 500=15.68%, 750=0.02% 00:31:22.665 cpu : usr=2.40%, sys=4.50%, ctx=4918, majf=0, minf=1 00:31:22.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.665 issued rwts: total=2357,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:22.665 job2: (groupid=0, jobs=1): err= 0: pid=3685150: Tue Nov 19 17:49:24 2024 00:31:22.665 read: IOPS=544, BW=2180KiB/s (2232kB/s)(2184KiB/1002msec) 00:31:22.665 slat (nsec): min=4742, max=25478, avg=8584.63, stdev=2105.90 00:31:22.665 clat (usec): min=225, max=41448, avg=1453.12, stdev=6882.99 00:31:22.665 lat (usec): min=238, max=41456, avg=1461.71, stdev=6883.19 00:31:22.665 clat percentiles (usec): 00:31:22.665 | 1.00th=[ 237], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:31:22.665 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:31:22.665 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 293], 00:31:22.665 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:31:22.665 | 99.99th=[41681] 00:31:22.665 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:31:22.665 slat (nsec): min=9619, max=48352, avg=11083.88, stdev=1907.38 00:31:22.665 clat (usec): min=149, 
max=430, avg=183.65, stdev=19.61 00:31:22.665 lat (usec): min=160, max=464, avg=194.73, stdev=20.33 00:31:22.665 clat percentiles (usec): 00:31:22.665 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:31:22.665 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:31:22.665 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 210], 00:31:22.665 | 99.00th=[ 227], 99.50th=[ 318], 99.90th=[ 375], 99.95th=[ 433], 00:31:22.665 | 99.99th=[ 433] 00:31:22.665 bw ( KiB/s): min= 8192, max= 8192, per=45.91%, avg=8192.00, stdev= 0.00, samples=1 00:31:22.665 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:22.665 lat (usec) : 250=74.78%, 500=24.20% 00:31:22.665 lat (msec) : 50=1.02% 00:31:22.665 cpu : usr=0.40%, sys=2.00%, ctx=1571, majf=0, minf=1 00:31:22.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.665 issued rwts: total=546,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:22.665 job3: (groupid=0, jobs=1): err= 0: pid=3685151: Tue Nov 19 17:49:24 2024 00:31:22.665 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:31:22.665 slat (nsec): min=10341, max=27831, avg=23297.64, stdev=3399.85 00:31:22.665 clat (usec): min=40845, max=41241, avg=40984.55, stdev=89.55 00:31:22.665 lat (usec): min=40873, max=41251, avg=41007.85, stdev=87.85 00:31:22.665 clat percentiles (usec): 00:31:22.665 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:22.665 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:22.665 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:22.665 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:22.665 | 99.99th=[41157] 
00:31:22.666 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:31:22.666 slat (nsec): min=9551, max=39569, avg=11426.75, stdev=1724.99 00:31:22.666 clat (usec): min=161, max=365, avg=240.02, stdev=12.88 00:31:22.666 lat (usec): min=172, max=405, avg=251.45, stdev=13.37 00:31:22.666 clat percentiles (usec): 00:31:22.666 | 1.00th=[ 180], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 239], 00:31:22.666 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 241], 60.00th=[ 243], 00:31:22.666 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 251], 00:31:22.666 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 367], 99.95th=[ 367], 00:31:22.666 | 99.99th=[ 367] 00:31:22.666 bw ( KiB/s): min= 4096, max= 4096, per=22.96%, avg=4096.00, stdev= 0.00, samples=1 00:31:22.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:22.666 lat (usec) : 250=91.01%, 500=4.87% 00:31:22.666 lat (msec) : 50=4.12% 00:31:22.666 cpu : usr=0.10%, sys=0.78%, ctx=535, majf=0, minf=1 00:31:22.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.666 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:22.666 00:31:22.666 Run status group 0 (all jobs): 00:31:22.666 READ: bw=11.1MiB/s (11.7MB/s), 85.2KiB/s-9419KiB/s (87.2kB/s-9645kB/s), io=11.5MiB (12.1MB), run=1001-1033msec 00:31:22.666 WRITE: bw=17.4MiB/s (18.3MB/s), 1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1033msec 00:31:22.666 00:31:22.666 Disk stats (read/write): 00:31:22.666 nvme0n1: ios=51/512, merge=0/0, ticks=1581/81, in_queue=1662, util=98.90% 00:31:22.666 nvme0n2: ios=2083/2109, merge=0/0, ticks=1118/326, in_queue=1444, util=96.86% 00:31:22.666 nvme0n3: ios=576/1024, merge=0/0, 
ticks=1481/181, in_queue=1662, util=97.51% 00:31:22.666 nvme0n4: ios=75/512, merge=0/0, ticks=874/123, in_queue=997, util=98.53% 00:31:22.666 17:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:22.666 [global] 00:31:22.666 thread=1 00:31:22.666 invalidate=1 00:31:22.666 rw=write 00:31:22.666 time_based=1 00:31:22.666 runtime=1 00:31:22.666 ioengine=libaio 00:31:22.666 direct=1 00:31:22.666 bs=4096 00:31:22.666 iodepth=128 00:31:22.666 norandommap=0 00:31:22.666 numjobs=1 00:31:22.666 00:31:22.666 verify_dump=1 00:31:22.666 verify_backlog=512 00:31:22.666 verify_state_save=0 00:31:22.666 do_verify=1 00:31:22.666 verify=crc32c-intel 00:31:22.666 [job0] 00:31:22.666 filename=/dev/nvme0n1 00:31:22.666 [job1] 00:31:22.666 filename=/dev/nvme0n2 00:31:22.666 [job2] 00:31:22.666 filename=/dev/nvme0n3 00:31:22.666 [job3] 00:31:22.666 filename=/dev/nvme0n4 00:31:22.666 Could not set queue depth (nvme0n1) 00:31:22.666 Could not set queue depth (nvme0n2) 00:31:22.666 Could not set queue depth (nvme0n3) 00:31:22.666 Could not set queue depth (nvme0n4) 00:31:22.666 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:22.666 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:22.666 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:22.666 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:22.666 fio-3.35 00:31:22.666 Starting 4 threads 00:31:24.041 00:31:24.041 job0: (groupid=0, jobs=1): err= 0: pid=3685518: Tue Nov 19 17:49:26 2024 00:31:24.041 read: IOPS=3975, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1004msec) 00:31:24.041 slat (nsec): min=1486, max=43592k, avg=118099.26, stdev=1084764.23 
00:31:24.041 clat (usec): min=688, max=73337, avg=16168.67, stdev=10932.65 00:31:24.041 lat (usec): min=3415, max=73345, avg=16286.77, stdev=10972.59 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 7046], 5.00th=[10421], 10.00th=[11076], 20.00th=[11994], 00:31:24.041 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:31:24.041 | 70.00th=[13304], 80.00th=[13829], 90.00th=[22676], 95.00th=[47449], 00:31:24.041 | 99.00th=[70779], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:31:24.041 | 99.99th=[72877] 00:31:24.041 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:31:24.041 slat (usec): min=2, max=20606, avg=108.74, stdev=755.39 00:31:24.041 clat (usec): min=7475, max=73375, avg=14577.71, stdev=7402.90 00:31:24.041 lat (usec): min=7488, max=76052, avg=14686.46, stdev=7424.00 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10945], 20.00th=[11994], 00:31:24.041 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:31:24.041 | 70.00th=[13042], 80.00th=[13435], 90.00th=[21627], 95.00th=[26608], 00:31:24.041 | 99.00th=[54789], 99.50th=[57410], 99.90th=[72877], 99.95th=[72877], 00:31:24.041 | 99.99th=[72877] 00:31:24.041 bw ( KiB/s): min=16384, max=16384, per=21.82%, avg=16384.00, stdev= 0.00, samples=2 00:31:24.041 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:31:24.041 lat (usec) : 750=0.01% 00:31:24.041 lat (msec) : 4=0.40%, 10=3.44%, 20=83.92%, 50=9.14%, 100=3.09% 00:31:24.041 cpu : usr=2.99%, sys=5.48%, ctx=375, majf=0, minf=1 00:31:24.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:24.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.041 issued rwts: total=3991,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.041 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:24.041 job1: (groupid=0, jobs=1): err= 0: pid=3685519: Tue Nov 19 17:49:26 2024 00:31:24.041 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.1MiB/1009msec) 00:31:24.041 slat (nsec): min=1318, max=11187k, avg=93075.56, stdev=797055.34 00:31:24.041 clat (usec): min=3207, max=22959, avg=11680.46, stdev=2742.10 00:31:24.041 lat (usec): min=4675, max=30047, avg=11773.53, stdev=2828.02 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 5866], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9765], 00:31:24.041 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:31:24.041 | 70.00th=[11994], 80.00th=[12649], 90.00th=[15270], 95.00th=[17957], 00:31:24.041 | 99.00th=[21365], 99.50th=[22152], 99.90th=[22676], 99.95th=[22938], 00:31:24.041 | 99.99th=[22938] 00:31:24.041 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:31:24.041 slat (usec): min=2, max=20812, avg=86.14, stdev=710.20 00:31:24.041 clat (usec): min=1878, max=69167, avg=11214.80, stdev=5017.96 00:31:24.041 lat (usec): min=2169, max=69181, avg=11300.94, stdev=5087.71 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 4080], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 8455], 00:31:24.041 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10945], 60.00th=[11207], 00:31:24.041 | 70.00th=[11731], 80.00th=[12256], 90.00th=[15139], 95.00th=[17171], 00:31:24.041 | 99.00th=[38536], 99.50th=[48497], 99.90th=[68682], 99.95th=[68682], 00:31:24.041 | 99.99th=[68682] 00:31:24.041 bw ( KiB/s): min=20480, max=23800, per=29.48%, avg=22140.00, stdev=2347.59, samples=2 00:31:24.041 iops : min= 5120, max= 5950, avg=5535.00, stdev=586.90, samples=2 00:31:24.041 lat (msec) : 2=0.01%, 4=0.45%, 10=27.03%, 20=69.90%, 50=2.47% 00:31:24.041 lat (msec) : 100=0.14% 00:31:24.041 cpu : usr=4.66%, sys=6.35%, ctx=348, majf=0, minf=1 00:31:24.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:24.041 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.041 issued rwts: total=5150,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.041 job2: (groupid=0, jobs=1): err= 0: pid=3685520: Tue Nov 19 17:49:26 2024 00:31:24.041 read: IOPS=5036, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1008msec) 00:31:24.041 slat (nsec): min=1400, max=12888k, avg=105427.86, stdev=893699.11 00:31:24.041 clat (usec): min=1615, max=26489, avg=13484.89, stdev=3548.99 00:31:24.041 lat (usec): min=3582, max=33942, avg=13590.32, stdev=3630.25 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 7504], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10945], 00:31:24.041 | 30.00th=[11338], 40.00th=[12387], 50.00th=[13042], 60.00th=[13173], 00:31:24.041 | 70.00th=[13829], 80.00th=[15139], 90.00th=[19268], 95.00th=[21627], 00:31:24.041 | 99.00th=[24249], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:31:24.041 | 99.99th=[26608] 00:31:24.041 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:31:24.041 slat (usec): min=2, max=11415, avg=85.55, stdev=637.77 00:31:24.041 clat (usec): min=1549, max=25394, avg=11539.26, stdev=3234.02 00:31:24.041 lat (usec): min=1562, max=25398, avg=11624.81, stdev=3281.61 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 3949], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 8848], 00:31:24.041 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[12387], 00:31:24.041 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[15795], 00:31:24.041 | 99.00th=[20055], 99.50th=[20317], 99.90th=[25297], 99.95th=[25297], 00:31:24.041 | 99.99th=[25297] 00:31:24.041 bw ( KiB/s): min=20088, max=20872, per=27.27%, avg=20480.00, stdev=554.37, samples=2 00:31:24.041 iops : min= 5022, max= 5218, avg=5120.00, stdev=138.59, samples=2 00:31:24.041 lat (msec) : 2=0.03%, 4=0.91%, 
10=17.59%, 20=76.57%, 50=4.89% 00:31:24.041 cpu : usr=3.67%, sys=6.45%, ctx=388, majf=0, minf=2 00:31:24.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:24.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.041 issued rwts: total=5077,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.041 job3: (groupid=0, jobs=1): err= 0: pid=3685522: Tue Nov 19 17:49:26 2024 00:31:24.041 read: IOPS=3949, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1005msec) 00:31:24.041 slat (nsec): min=1423, max=17185k, avg=129039.66, stdev=1083838.70 00:31:24.041 clat (usec): min=1281, max=44601, avg=16313.33, stdev=5031.62 00:31:24.041 lat (usec): min=4242, max=44652, avg=16442.37, stdev=5128.98 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 9110], 5.00th=[11731], 10.00th=[12780], 20.00th=[13304], 00:31:24.041 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 00:31:24.041 | 70.00th=[16188], 80.00th=[20317], 90.00th=[23200], 95.00th=[25297], 00:31:24.041 | 99.00th=[39060], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:31:24.041 | 99.99th=[44827] 00:31:24.041 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:31:24.041 slat (usec): min=2, max=26873, avg=112.96, stdev=960.96 00:31:24.041 clat (usec): min=1520, max=40542, avg=14416.54, stdev=4482.39 00:31:24.041 lat (usec): min=2615, max=40554, avg=14529.50, stdev=4540.09 00:31:24.041 clat percentiles (usec): 00:31:24.041 | 1.00th=[ 5211], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[11731], 00:31:24.041 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13698], 60.00th=[14091], 00:31:24.041 | 70.00th=[14746], 80.00th=[16581], 90.00th=[20317], 95.00th=[20841], 00:31:24.041 | 99.00th=[32900], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:31:24.041 | 99.99th=[40633] 
00:31:24.041 bw ( KiB/s): min=16384, max=16384, per=21.82%, avg=16384.00, stdev= 0.00, samples=2 00:31:24.041 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:31:24.041 lat (msec) : 2=0.02%, 4=0.17%, 10=6.89%, 20=77.23%, 50=15.67% 00:31:24.041 cpu : usr=2.29%, sys=6.57%, ctx=263, majf=0, minf=1 00:31:24.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:24.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.041 issued rwts: total=3969,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.041 00:31:24.041 Run status group 0 (all jobs): 00:31:24.041 READ: bw=70.4MiB/s (73.8MB/s), 15.4MiB/s-19.9MiB/s (16.2MB/s-20.9MB/s), io=71.0MiB (74.5MB), run=1004-1009msec 00:31:24.041 WRITE: bw=73.3MiB/s (76.9MB/s), 15.9MiB/s-21.8MiB/s (16.7MB/s-22.9MB/s), io=74.0MiB (77.6MB), run=1004-1009msec 00:31:24.041 00:31:24.041 Disk stats (read/write): 00:31:24.041 nvme0n1: ios=3124/3543, merge=0/0, ticks=23415/18922, in_queue=42337, util=93.89% 00:31:24.042 nvme0n2: ios=4496/4608, merge=0/0, ticks=50502/45794, in_queue=96296, util=96.55% 00:31:24.042 nvme0n3: ios=4193/4608, merge=0/0, ticks=54441/50557, in_queue=104998, util=96.05% 00:31:24.042 nvme0n4: ios=3131/3582, merge=0/0, ticks=50697/51015, in_queue=101712, util=98.11% 00:31:24.042 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:24.042 [global] 00:31:24.042 thread=1 00:31:24.042 invalidate=1 00:31:24.042 rw=randwrite 00:31:24.042 time_based=1 00:31:24.042 runtime=1 00:31:24.042 ioengine=libaio 00:31:24.042 direct=1 00:31:24.042 bs=4096 00:31:24.042 iodepth=128 00:31:24.042 norandommap=0 00:31:24.042 numjobs=1 00:31:24.042 00:31:24.042 
verify_dump=1 00:31:24.042 verify_backlog=512 00:31:24.042 verify_state_save=0 00:31:24.042 do_verify=1 00:31:24.042 verify=crc32c-intel 00:31:24.042 [job0] 00:31:24.042 filename=/dev/nvme0n1 00:31:24.042 [job1] 00:31:24.042 filename=/dev/nvme0n2 00:31:24.042 [job2] 00:31:24.042 filename=/dev/nvme0n3 00:31:24.042 [job3] 00:31:24.042 filename=/dev/nvme0n4 00:31:24.042 Could not set queue depth (nvme0n1) 00:31:24.042 Could not set queue depth (nvme0n2) 00:31:24.042 Could not set queue depth (nvme0n3) 00:31:24.042 Could not set queue depth (nvme0n4) 00:31:24.300 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:24.300 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:24.300 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:24.300 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:24.300 fio-3.35 00:31:24.300 Starting 4 threads 00:31:25.673 00:31:25.673 job0: (groupid=0, jobs=1): err= 0: pid=3685892: Tue Nov 19 17:49:27 2024 00:31:25.673 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:31:25.673 slat (nsec): min=1221, max=27014k, avg=110688.42, stdev=1101421.23 00:31:25.673 clat (usec): min=1222, max=75297, avg=15940.21, stdev=10572.49 00:31:25.673 lat (usec): min=1225, max=75302, avg=16050.90, stdev=10659.50 00:31:25.673 clat percentiles (usec): 00:31:25.673 | 1.00th=[ 2868], 5.00th=[ 5800], 10.00th=[ 7635], 20.00th=[ 9503], 00:31:25.673 | 30.00th=[10028], 40.00th=[11863], 50.00th=[12911], 60.00th=[14222], 00:31:25.673 | 70.00th=[16909], 80.00th=[20317], 90.00th=[27657], 95.00th=[36439], 00:31:25.673 | 99.00th=[57934], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:31:25.673 | 99.99th=[74974] 00:31:25.673 write: IOPS=4867, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1006msec); 0 zone resets 
00:31:25.673 slat (usec): min=2, max=18412, avg=84.70, stdev=858.14 00:31:25.673 clat (usec): min=521, max=42174, avg=12741.19, stdev=5868.66 00:31:25.673 lat (usec): min=592, max=42202, avg=12825.89, stdev=5948.09 00:31:25.673 clat percentiles (usec): 00:31:25.673 | 1.00th=[ 3097], 5.00th=[ 5604], 10.00th=[ 6521], 20.00th=[ 7504], 00:31:25.673 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[13566], 00:31:25.673 | 70.00th=[16712], 80.00th=[17695], 90.00th=[20579], 95.00th=[23725], 00:31:25.673 | 99.00th=[31851], 99.50th=[34341], 99.90th=[34341], 99.95th=[35914], 00:31:25.673 | 99.99th=[42206] 00:31:25.673 bw ( KiB/s): min=16384, max=21776, per=27.85%, avg=19080.00, stdev=3812.72, samples=2 00:31:25.673 iops : min= 4096, max= 5444, avg=4770.00, stdev=953.18, samples=2 00:31:25.673 lat (usec) : 750=0.07% 00:31:25.673 lat (msec) : 2=0.34%, 4=1.88%, 10=35.79%, 20=46.63%, 50=14.50% 00:31:25.673 lat (msec) : 100=0.79% 00:31:25.673 cpu : usr=3.28%, sys=4.98%, ctx=212, majf=0, minf=1 00:31:25.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:25.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.673 issued rwts: total=4096,4897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.673 job1: (groupid=0, jobs=1): err= 0: pid=3685893: Tue Nov 19 17:49:27 2024 00:31:25.673 read: IOPS=3605, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1009msec) 00:31:25.673 slat (nsec): min=1382, max=19446k, avg=116268.75, stdev=986960.01 00:31:25.673 clat (usec): min=3081, max=50843, avg=14917.88, stdev=6421.46 00:31:25.673 lat (usec): min=3092, max=50870, avg=15034.15, stdev=6491.93 00:31:25.673 clat percentiles (usec): 00:31:25.673 | 1.00th=[ 6980], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10552], 00:31:25.673 | 30.00th=[11469], 40.00th=[11731], 50.00th=[13042], 60.00th=[14353], 
00:31:25.673 | 70.00th=[16188], 80.00th=[17171], 90.00th=[21627], 95.00th=[30016], 00:31:25.673 | 99.00th=[38536], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:31:25.673 | 99.99th=[50594] 00:31:25.673 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:31:25.673 slat (usec): min=2, max=77233, avg=135.59, stdev=1525.63 00:31:25.673 clat (msec): min=2, max=101, avg=17.97, stdev=13.89 00:31:25.673 lat (msec): min=2, max=101, avg=18.10, stdev=13.99 00:31:25.673 clat percentiles (msec): 00:31:25.673 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:31:25.673 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 18], 00:31:25.673 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 27], 95.00th=[ 30], 00:31:25.673 | 99.00th=[ 82], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:31:25.673 | 99.99th=[ 102] 00:31:25.673 bw ( KiB/s): min=12288, max=19888, per=23.48%, avg=16088.00, stdev=5374.01, samples=2 00:31:25.673 iops : min= 3072, max= 4972, avg=4022.00, stdev=1343.50, samples=2 00:31:25.673 lat (msec) : 4=0.50%, 10=15.18%, 20=63.85%, 50=18.81%, 100=1.23% 00:31:25.673 lat (msec) : 250=0.43% 00:31:25.673 cpu : usr=1.79%, sys=5.56%, ctx=296, majf=0, minf=1 00:31:25.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:25.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.673 issued rwts: total=3638,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.673 job2: (groupid=0, jobs=1): err= 0: pid=3685894: Tue Nov 19 17:49:27 2024 00:31:25.673 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:31:25.673 slat (nsec): min=1091, max=49741k, avg=208985.41, stdev=1932229.38 00:31:25.673 clat (usec): min=3059, max=94239, avg=27259.82, stdev=24574.71 00:31:25.673 lat (usec): min=3062, max=94246, avg=27468.80, 
stdev=24722.85 00:31:25.673 clat percentiles (usec): 00:31:25.674 | 1.00th=[ 5145], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9896], 00:31:25.674 | 30.00th=[11338], 40.00th=[13173], 50.00th=[15401], 60.00th=[17957], 00:31:25.674 | 70.00th=[25822], 80.00th=[48497], 90.00th=[72877], 95.00th=[84411], 00:31:25.674 | 99.00th=[93848], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:31:25.674 | 99.99th=[93848] 00:31:25.674 write: IOPS=3158, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1004msec); 0 zone resets 00:31:25.674 slat (usec): min=2, max=16583, avg=102.00, stdev=669.52 00:31:25.674 clat (usec): min=1454, max=48440, avg=13691.16, stdev=7476.37 00:31:25.674 lat (usec): min=1465, max=48448, avg=13793.16, stdev=7533.34 00:31:25.674 clat percentiles (usec): 00:31:25.674 | 1.00th=[ 3195], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 8455], 00:31:25.674 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[12125], 60.00th=[13304], 00:31:25.674 | 70.00th=[14091], 80.00th=[19006], 90.00th=[23987], 95.00th=[25560], 00:31:25.674 | 99.00th=[41681], 99.50th=[44303], 99.90th=[48497], 99.95th=[48497], 00:31:25.674 | 99.99th=[48497] 00:31:25.674 bw ( KiB/s): min=12288, max=12296, per=17.94%, avg=12292.00, stdev= 5.66, samples=2 00:31:25.674 iops : min= 3072, max= 3074, avg=3073.00, stdev= 1.41, samples=2 00:31:25.674 lat (msec) : 2=0.13%, 4=0.91%, 10=32.71%, 20=38.20%, 50=18.64% 00:31:25.674 lat (msec) : 100=9.40% 00:31:25.674 cpu : usr=1.60%, sys=2.89%, ctx=292, majf=0, minf=1 00:31:25.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:25.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.674 issued rwts: total=3072,3171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.674 job3: (groupid=0, jobs=1): err= 0: pid=3685895: Tue Nov 19 17:49:27 2024 00:31:25.674 read: IOPS=4701, 
BW=18.4MiB/s (19.3MB/s)(18.5MiB/1007msec) 00:31:25.674 slat (nsec): min=1439, max=16805k, avg=95984.06, stdev=828942.64 00:31:25.674 clat (usec): min=1154, max=42712, avg=12890.54, stdev=6275.61 00:31:25.674 lat (usec): min=3386, max=42736, avg=12986.52, stdev=6329.82 00:31:25.674 clat percentiles (usec): 00:31:25.674 | 1.00th=[ 3687], 5.00th=[ 5407], 10.00th=[ 7635], 20.00th=[ 8979], 00:31:25.674 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11076], 60.00th=[12649], 00:31:25.674 | 70.00th=[13566], 80.00th=[15926], 90.00th=[20841], 95.00th=[24773], 00:31:25.674 | 99.00th=[36439], 99.50th=[38536], 99.90th=[38536], 99.95th=[40109], 00:31:25.674 | 99.99th=[42730] 00:31:25.674 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:31:25.674 slat (usec): min=2, max=26748, avg=96.42, stdev=852.67 00:31:25.674 clat (usec): min=2724, max=50753, avg=12997.13, stdev=5496.56 00:31:25.674 lat (usec): min=2734, max=50775, avg=13093.55, stdev=5561.10 00:31:25.674 clat percentiles (usec): 00:31:25.674 | 1.00th=[ 3425], 5.00th=[ 6521], 10.00th=[ 8094], 20.00th=[ 9241], 00:31:25.674 | 30.00th=[10028], 40.00th=[11338], 50.00th=[11863], 60.00th=[12256], 00:31:25.674 | 70.00th=[13304], 80.00th=[15926], 90.00th=[20317], 95.00th=[23987], 00:31:25.674 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:31:25.674 | 99.99th=[50594] 00:31:25.674 bw ( KiB/s): min=19504, max=21440, per=29.88%, avg=20472.00, stdev=1368.96, samples=2 00:31:25.674 iops : min= 4876, max= 5360, avg=5118.00, stdev=342.24, samples=2 00:31:25.674 lat (msec) : 2=0.01%, 4=1.74%, 10=30.47%, 20=57.35%, 50=10.42% 00:31:25.674 lat (msec) : 100=0.01% 00:31:25.674 cpu : usr=3.38%, sys=5.77%, ctx=412, majf=0, minf=1 00:31:25.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:25.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.674 
issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.674 00:31:25.674 Run status group 0 (all jobs): 00:31:25.674 READ: bw=60.2MiB/s (63.1MB/s), 12.0MiB/s-18.4MiB/s (12.5MB/s-19.3MB/s), io=60.7MiB (63.7MB), run=1004-1009msec 00:31:25.674 WRITE: bw=66.9MiB/s (70.2MB/s), 12.3MiB/s-19.9MiB/s (12.9MB/s-20.8MB/s), io=67.5MiB (70.8MB), run=1004-1009msec 00:31:25.674 00:31:25.674 Disk stats (read/write): 00:31:25.674 nvme0n1: ios=3433/4096, merge=0/0, ticks=50304/51909, in_queue=102213, util=86.67% 00:31:25.674 nvme0n2: ios=3100/3128, merge=0/0, ticks=46494/50623, in_queue=97117, util=97.36% 00:31:25.674 nvme0n3: ios=2359/2560, merge=0/0, ticks=30788/24179, in_queue=54967, util=98.12% 00:31:25.674 nvme0n4: ios=4121/4219, merge=0/0, ticks=51175/52468, in_queue=103643, util=97.27% 00:31:25.674 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:25.674 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3686124 00:31:25.674 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:25.674 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:25.674 [global] 00:31:25.674 thread=1 00:31:25.674 invalidate=1 00:31:25.674 rw=read 00:31:25.674 time_based=1 00:31:25.674 runtime=10 00:31:25.674 ioengine=libaio 00:31:25.674 direct=1 00:31:25.674 bs=4096 00:31:25.674 iodepth=1 00:31:25.674 norandommap=1 00:31:25.674 numjobs=1 00:31:25.674 00:31:25.674 [job0] 00:31:25.674 filename=/dev/nvme0n1 00:31:25.674 [job1] 00:31:25.674 filename=/dev/nvme0n2 00:31:25.674 [job2] 00:31:25.674 filename=/dev/nvme0n3 00:31:25.674 [job3] 00:31:25.674 filename=/dev/nvme0n4 00:31:25.674 Could not set queue depth (nvme0n1) 
00:31:25.674 Could not set queue depth (nvme0n2) 00:31:25.674 Could not set queue depth (nvme0n3) 00:31:25.674 Could not set queue depth (nvme0n4) 00:31:25.932 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:25.932 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:25.932 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:25.932 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:25.932 fio-3.35 00:31:25.932 Starting 4 threads 00:31:28.500 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:28.758 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:28.758 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:31:28.758 fio: pid=3686263, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:29.016 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:29.016 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:29.016 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1757184, buflen=4096 00:31:29.016 fio: pid=3686262, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:29.273 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:31:29.273 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:29.273 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=19894272, buflen=4096 00:31:29.273 fio: pid=3686260, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:29.273 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45826048, buflen=4096 00:31:29.273 fio: pid=3686261, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:29.273 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:29.273 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:29.531 00:31:29.531 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3686260: Tue Nov 19 17:49:31 2024 00:31:29.532 read: IOPS=1545, BW=6181KiB/s (6330kB/s)(19.0MiB/3143msec) 00:31:29.532 slat (usec): min=6, max=13541, avg=17.19, stdev=338.23 00:31:29.532 clat (usec): min=189, max=41212, avg=623.53, stdev=3951.02 00:31:29.532 lat (usec): min=196, max=41232, avg=640.72, stdev=3965.90 00:31:29.532 clat percentiles (usec): 00:31:29.532 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 215], 00:31:29.532 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:31:29.532 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 277], 95.00th=[ 289], 00:31:29.532 | 99.00th=[ 3064], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:29.532 | 99.99th=[41157] 00:31:29.532 bw ( KiB/s): min= 96, max=16472, per=30.68%, avg=6096.17, stdev=6523.69, samples=6 00:31:29.532 iops : min= 24, 
max= 4118, avg=1524.00, stdev=1630.88, samples=6 00:31:29.532 lat (usec) : 250=81.97%, 500=16.94% 00:31:29.532 lat (msec) : 2=0.06%, 4=0.02%, 10=0.02%, 20=0.02%, 50=0.95% 00:31:29.532 cpu : usr=0.35%, sys=1.50%, ctx=4865, majf=0, minf=1 00:31:29.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.532 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3686261: Tue Nov 19 17:49:31 2024 00:31:29.532 read: IOPS=3360, BW=13.1MiB/s (13.8MB/s)(43.7MiB/3330msec) 00:31:29.532 slat (usec): min=5, max=17912, avg=12.45, stdev=256.32 00:31:29.532 clat (usec): min=180, max=41133, avg=281.93, stdev=1088.55 00:31:29.532 lat (usec): min=187, max=49874, avg=294.38, stdev=1147.63 00:31:29.532 clat percentiles (usec): 00:31:29.532 | 1.00th=[ 217], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 245], 00:31:29.532 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 249], 60.00th=[ 251], 00:31:29.532 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 258], 95.00th=[ 262], 00:31:29.532 | 99.00th=[ 404], 99.50th=[ 412], 99.90th=[ 515], 99.95th=[40633], 00:31:29.532 | 99.99th=[41157] 00:31:29.532 bw ( KiB/s): min= 9860, max=15624, per=73.36%, avg=14575.33, stdev=2311.13, samples=6 00:31:29.532 iops : min= 2465, max= 3906, avg=3643.83, stdev=577.78, samples=6 00:31:29.532 lat (usec) : 250=52.63%, 500=47.22%, 750=0.06% 00:31:29.532 lat (msec) : 2=0.01%, 50=0.07% 00:31:29.532 cpu : usr=0.81%, sys=3.03%, ctx=11196, majf=0, minf=2 00:31:29.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 issued rwts: total=11189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.532 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3686262: Tue Nov 19 17:49:31 2024 00:31:29.532 read: IOPS=146, BW=583KiB/s (597kB/s)(1716KiB/2942msec) 00:31:29.532 slat (nsec): min=3701, max=59208, avg=9350.59, stdev=5834.74 00:31:29.532 clat (usec): min=209, max=41241, avg=6797.39, stdev=14979.27 00:31:29.532 lat (usec): min=216, max=41256, avg=6806.71, stdev=14983.84 00:31:29.532 clat percentiles (usec): 00:31:29.532 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:31:29.532 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:31:29.532 | 70.00th=[ 258], 80.00th=[ 293], 90.00th=[41157], 95.00th=[41157], 00:31:29.532 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:29.532 | 99.99th=[41157] 00:31:29.532 bw ( KiB/s): min= 96, max= 2952, per=3.36%, avg=668.80, stdev=1276.35, samples=5 00:31:29.532 iops : min= 24, max= 738, avg=167.20, stdev=319.09, samples=5 00:31:29.532 lat (usec) : 250=63.95%, 500=19.77% 00:31:29.532 lat (msec) : 50=16.05% 00:31:29.532 cpu : usr=0.03%, sys=0.17%, ctx=431, majf=0, minf=2 00:31:29.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.532 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3686263: Tue Nov 19 17:49:31 2024 00:31:29.532 read: IOPS=24, BW=98.1KiB/s (100kB/s)(268KiB/2732msec) 
00:31:29.532 slat (nsec): min=9052, max=37945, avg=14583.97, stdev=5102.99 00:31:29.532 clat (usec): min=467, max=43927, avg=40441.01, stdev=4972.05 00:31:29.532 lat (usec): min=505, max=43953, avg=40455.66, stdev=4969.29 00:31:29.532 clat percentiles (usec): 00:31:29.532 | 1.00th=[ 469], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:29.532 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:29.532 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:29.532 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:31:29.532 | 99.99th=[43779] 00:31:29.532 bw ( KiB/s): min= 96, max= 104, per=0.49%, avg=97.60, stdev= 3.58, samples=5 00:31:29.532 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:31:29.532 lat (usec) : 500=1.47% 00:31:29.532 lat (msec) : 50=97.06% 00:31:29.532 cpu : usr=0.07%, sys=0.00%, ctx=68, majf=0, minf=2 00:31:29.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.532 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.532 00:31:29.532 Run status group 0 (all jobs): 00:31:29.532 READ: bw=19.4MiB/s (20.3MB/s), 98.1KiB/s-13.1MiB/s (100kB/s-13.8MB/s), io=64.6MiB (67.8MB), run=2732-3330msec 00:31:29.532 00:31:29.532 Disk stats (read/write): 00:31:29.532 nvme0n1: ios=4880/0, merge=0/0, ticks=3510/0, in_queue=3510, util=98.00% 00:31:29.532 nvme0n2: ios=11223/0, merge=0/0, ticks=3952/0, in_queue=3952, util=97.96% 00:31:29.532 nvme0n3: ios=427/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.52% 00:31:29.532 nvme0n4: ios=64/0, merge=0/0, ticks=2588/0, in_queue=2588, util=96.45% 00:31:29.532 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:29.532 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:29.790 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:29.790 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:30.047 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:30.047 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:30.305 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:30.305 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3686124 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:30.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:30.563 17:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:30.563 nvmf hotplug test: fio failed as expected 00:31:30.563 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:30.821 17:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.821 rmmod nvme_tcp 00:31:30.821 rmmod nvme_fabrics 00:31:30.821 rmmod nvme_keyring 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3683433 ']' 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3683433 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3683433 ']' 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3683433 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@959 -- # uname 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.821 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3683433 00:31:30.821 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:30.822 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:30.822 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3683433' 00:31:30.822 killing process with pid 3683433 00:31:30.822 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3683433 00:31:30.822 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3683433 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:31.081 17:49:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.081 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.615 00:31:33.615 real 0m26.620s 00:31:33.615 user 1m31.927s 00:31:33.615 sys 0m11.090s 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.615 ************************************ 00:31:33.615 END TEST nvmf_fio_target 00:31:33.615 ************************************ 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.615 ************************************ 00:31:33.615 START TEST nvmf_bdevio 00:31:33.615 
************************************ 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:33.615 * Looking for test storage... 00:31:33.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:33.615 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.616 --rc genhtml_branch_coverage=1 00:31:33.616 --rc genhtml_function_coverage=1 00:31:33.616 --rc genhtml_legend=1 00:31:33.616 --rc geninfo_all_blocks=1 00:31:33.616 --rc geninfo_unexecuted_blocks=1 00:31:33.616 00:31:33.616 ' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.616 --rc genhtml_branch_coverage=1 00:31:33.616 --rc genhtml_function_coverage=1 00:31:33.616 --rc genhtml_legend=1 00:31:33.616 --rc geninfo_all_blocks=1 00:31:33.616 --rc geninfo_unexecuted_blocks=1 00:31:33.616 00:31:33.616 ' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.616 --rc genhtml_branch_coverage=1 00:31:33.616 --rc genhtml_function_coverage=1 00:31:33.616 --rc genhtml_legend=1 00:31:33.616 --rc geninfo_all_blocks=1 00:31:33.616 --rc geninfo_unexecuted_blocks=1 00:31:33.616 00:31:33.616 ' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:33.616 --rc genhtml_branch_coverage=1 00:31:33.616 --rc genhtml_function_coverage=1 00:31:33.616 --rc genhtml_legend=1 00:31:33.616 --rc geninfo_all_blocks=1 00:31:33.616 --rc geninfo_unexecuted_blocks=1 00:31:33.616 00:31:33.616 ' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:33.616 17:49:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.616 17:49:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.616 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.617 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.187 17:49:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.187 17:49:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:40.187 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:40.187 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:40.187 Found net devices under 0000:86:00.0: cvl_0_0 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:40.187 Found net devices under 0000:86:00.1: cvl_0_1 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:40.187 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.188 
17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:31:40.188 00:31:40.188 --- 10.0.0.2 ping statistics --- 00:31:40.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.188 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:31:40.188 00:31:40.188 --- 10.0.0.1 ping statistics --- 00:31:40.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.188 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3690507 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3690507 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3690507 ']' 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.188 [2024-11-19 17:49:41.477069] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:40.188 [2024-11-19 17:49:41.478003] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:31:40.188 [2024-11-19 17:49:41.478036] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.188 [2024-11-19 17:49:41.558350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.188 [2024-11-19 17:49:41.600562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.188 [2024-11-19 17:49:41.600599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.188 [2024-11-19 17:49:41.600606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.188 [2024-11-19 17:49:41.600613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.188 [2024-11-19 17:49:41.600618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.188 [2024-11-19 17:49:41.602193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:40.188 [2024-11-19 17:49:41.602300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.188 [2024-11-19 17:49:41.602211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:40.188 [2024-11-19 17:49:41.602301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:40.188 [2024-11-19 17:49:41.669129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:40.188 [2024-11-19 17:49:41.669151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.188 [2024-11-19 17:49:41.669890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:40.188 [2024-11-19 17:49:41.669946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:40.188 [2024-11-19 17:49:41.670081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.188 [2024-11-19 17:49:41.743135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.188 Malloc0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.188 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.189 [2024-11-19 17:49:41.819196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.189 { 00:31:40.189 "params": { 00:31:40.189 "name": "Nvme$subsystem", 00:31:40.189 "trtype": "$TEST_TRANSPORT", 00:31:40.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.189 "adrfam": "ipv4", 00:31:40.189 "trsvcid": "$NVMF_PORT", 00:31:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.189 "hdgst": ${hdgst:-false}, 00:31:40.189 "ddgst": ${ddgst:-false} 00:31:40.189 }, 00:31:40.189 "method": "bdev_nvme_attach_controller" 00:31:40.189 } 00:31:40.189 EOF 00:31:40.189 )") 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:40.189 17:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.189 "params": { 00:31:40.189 "name": "Nvme1", 00:31:40.189 "trtype": "tcp", 00:31:40.189 "traddr": "10.0.0.2", 00:31:40.189 "adrfam": "ipv4", 00:31:40.189 "trsvcid": "4420", 00:31:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.189 "hdgst": false, 00:31:40.189 "ddgst": false 00:31:40.189 }, 00:31:40.189 "method": "bdev_nvme_attach_controller" 00:31:40.189 }' 00:31:40.189 [2024-11-19 17:49:41.868715] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:31:40.189 [2024-11-19 17:49:41.868758] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690732 ] 00:31:40.189 [2024-11-19 17:49:41.943026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.189 [2024-11-19 17:49:41.986930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.189 [2024-11-19 17:49:41.987045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.189 [2024-11-19 17:49:41.987045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.189 I/O targets: 00:31:40.189 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:40.189 00:31:40.189 00:31:40.189 CUnit - A unit testing framework for C - Version 2.1-3 00:31:40.189 http://cunit.sourceforge.net/ 00:31:40.189 00:31:40.189 00:31:40.189 Suite: bdevio tests on: Nvme1n1 00:31:40.189 Test: blockdev write read block ...passed 00:31:40.189 Test: blockdev write zeroes read block ...passed 00:31:40.189 Test: blockdev write zeroes read no split ...passed 00:31:40.189 Test: blockdev 
write zeroes read split ...passed 00:31:40.189 Test: blockdev write zeroes read split partial ...passed 00:31:40.189 Test: blockdev reset ...[2024-11-19 17:49:42.285232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:40.189 [2024-11-19 17:49:42.285294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ec340 (9): Bad file descriptor 00:31:40.189 [2024-11-19 17:49:42.329848] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:40.189 passed 00:31:40.189 Test: blockdev write read 8 blocks ...passed 00:31:40.189 Test: blockdev write read size > 128k ...passed 00:31:40.189 Test: blockdev write read invalid size ...passed 00:31:40.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:40.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:40.189 Test: blockdev write read max offset ...passed 00:31:40.446 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:40.446 Test: blockdev writev readv 8 blocks ...passed 00:31:40.446 Test: blockdev writev readv 30 x 1block ...passed 00:31:40.446 Test: blockdev writev readv block ...passed 00:31:40.446 Test: blockdev writev readv size > 128k ...passed 00:31:40.446 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:40.446 Test: blockdev comparev and writev ...[2024-11-19 17:49:42.501202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.446 [2024-11-19 17:49:42.501229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:40.446 [2024-11-19 17:49:42.501243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.446 
[2024-11-19 17:49:42.501251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:40.446 [2024-11-19 17:49:42.501549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.446 [2024-11-19 17:49:42.501559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:40.446 [2024-11-19 17:49:42.501570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.446 [2024-11-19 17:49:42.501577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:40.446 [2024-11-19 17:49:42.501872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.446 [2024-11-19 17:49:42.501883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:40.446 [2024-11-19 17:49:42.501894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.446 [2024-11-19 17:49:42.501901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:40.446 [2024-11-19 17:49:42.502198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.447 [2024-11-19 17:49:42.502209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:40.447 [2024-11-19 17:49:42.502220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:40.447 [2024-11-19 17:49:42.502228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:40.447 passed 00:31:40.447 Test: blockdev nvme passthru rw ...passed 00:31:40.447 Test: blockdev nvme passthru vendor specific ...[2024-11-19 17:49:42.584357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:40.447 [2024-11-19 17:49:42.584372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:40.447 [2024-11-19 17:49:42.584483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:40.447 [2024-11-19 17:49:42.584492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:40.447 [2024-11-19 17:49:42.584609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:40.447 [2024-11-19 17:49:42.584618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:40.447 [2024-11-19 17:49:42.584739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:40.447 [2024-11-19 17:49:42.584748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:40.447 passed 00:31:40.447 Test: blockdev nvme admin passthru ...passed 00:31:40.447 Test: blockdev copy ...passed 00:31:40.447 00:31:40.447 Run Summary: Type Total Ran Passed Failed Inactive 00:31:40.447 suites 1 1 n/a 0 0 00:31:40.447 tests 23 23 23 0 0 00:31:40.447 asserts 152 152 152 0 n/a 00:31:40.447 00:31:40.447 Elapsed time = 1.005 
seconds 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:40.704 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:40.705 rmmod nvme_tcp 00:31:40.705 rmmod nvme_fabrics 00:31:40.705 rmmod nvme_keyring 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3690507 ']' 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3690507 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3690507 ']' 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3690507 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3690507 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3690507' 00:31:40.705 killing process with pid 3690507 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3690507 00:31:40.705 17:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3690507 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.964 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.500 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:43.500 00:31:43.500 real 0m9.848s 00:31:43.500 user 0m8.119s 00:31:43.500 sys 0m5.167s 00:31:43.500 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.500 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.500 ************************************ 00:31:43.500 END TEST nvmf_bdevio 00:31:43.500 ************************************ 00:31:43.500 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:43.500 00:31:43.500 real 4m34.406s 00:31:43.500 user 9m7.095s 00:31:43.500 sys 1m51.521s 00:31:43.500 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:31:43.500 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:43.500 ************************************ 00:31:43.500 END TEST nvmf_target_core_interrupt_mode 00:31:43.500 ************************************ 00:31:43.500 17:49:45 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:43.500 17:49:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:43.500 17:49:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:43.500 17:49:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.500 ************************************ 00:31:43.500 START TEST nvmf_interrupt 00:31:43.500 ************************************ 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:43.501 * Looking for test storage... 
00:31:43.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.501 --rc genhtml_branch_coverage=1 00:31:43.501 --rc genhtml_function_coverage=1 00:31:43.501 --rc genhtml_legend=1 00:31:43.501 --rc geninfo_all_blocks=1 00:31:43.501 --rc geninfo_unexecuted_blocks=1 00:31:43.501 00:31:43.501 ' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.501 --rc genhtml_branch_coverage=1 00:31:43.501 --rc 
genhtml_function_coverage=1 00:31:43.501 --rc genhtml_legend=1 00:31:43.501 --rc geninfo_all_blocks=1 00:31:43.501 --rc geninfo_unexecuted_blocks=1 00:31:43.501 00:31:43.501 ' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.501 --rc genhtml_branch_coverage=1 00:31:43.501 --rc genhtml_function_coverage=1 00:31:43.501 --rc genhtml_legend=1 00:31:43.501 --rc geninfo_all_blocks=1 00:31:43.501 --rc geninfo_unexecuted_blocks=1 00:31:43.501 00:31:43.501 ' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.501 --rc genhtml_branch_coverage=1 00:31:43.501 --rc genhtml_function_coverage=1 00:31:43.501 --rc genhtml_legend=1 00:31:43.501 --rc geninfo_all_blocks=1 00:31:43.501 --rc geninfo_unexecuted_blocks=1 00:31:43.501 00:31:43.501 ' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.501 
17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.501 
17:49:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.501 17:49:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.501 17:49:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:43.502 
17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:43.502 17:49:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.072 17:49:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:50.072 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:50.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.072 17:49:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:50.072 Found net devices under 0000:86:00.0: cvl_0_0 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:50.072 Found net devices under 0000:86:00.1: cvl_0_1 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.072 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.072 17:49:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:50.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:31:50.073 00:31:50.073 --- 10.0.0.2 ping statistics --- 00:31:50.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.073 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:50.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:31:50.073 00:31:50.073 --- 10.0.0.1 ping statistics --- 00:31:50.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.073 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:50.073 17:49:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3694288 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3694288 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3694288 ']' 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.073 17:49:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.073 [2024-11-19 17:49:51.462036] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:50.073 [2024-11-19 17:49:51.462968] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:31:50.073 [2024-11-19 17:49:51.463017] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.073 [2024-11-19 17:49:51.539206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:50.073 [2024-11-19 17:49:51.582505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.073 [2024-11-19 17:49:51.582539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.073 [2024-11-19 17:49:51.582547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.073 [2024-11-19 17:49:51.582553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.073 [2024-11-19 17:49:51.582558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:50.073 [2024-11-19 17:49:51.583749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.073 [2024-11-19 17:49:51.583751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.073 [2024-11-19 17:49:51.652010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:50.073 [2024-11-19 17:49:51.652612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:50.073 [2024-11-19 17:49:51.652784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:50.332 5000+0 records in 00:31:50.332 5000+0 records out 00:31:50.332 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0181759 s, 563 MB/s 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.332 AIO0 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.332 17:49:52 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.332 [2024-11-19 17:49:52.392561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:50.332 [2024-11-19 17:49:52.432825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3694288 0 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3694288 0 idle 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3694288 -w 256 00:31:50.332 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:50.591 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694288 root 20 0 128.2g 46848 34560 R 0.0 0.0 0:00.25 reactor_0' 00:31:50.591 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694288 root 20 0 128.2g 46848 34560 R 0.0 0.0 0:00.25 reactor_0 00:31:50.591 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:50.591 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:50.591 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:50.591 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:50.591 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:50.592 
17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3694288 1 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3694288 1 idle 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3694288 -w 256 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694292 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694292 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:50.592 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3694552 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3694288 0 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3694288 0 busy 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3694288 -w 256 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694288 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0' 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694288 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:50.850 17:49:52 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3694288 1 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3694288 1 busy 00:31:50.850 17:49:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3694288 -w 256 00:31:50.850 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694292 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.27 reactor_1' 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694292 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.27 reactor_1 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:51.150 17:49:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3694552 00:32:01.162 Initializing NVMe Controllers 00:32:01.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:01.162 Controller IO queue size 256, less than required. 00:32:01.162 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:01.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:01.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:01.162 Initialization complete. Launching workers. 
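The reactor_is_busy_or_idle checks traced above take one `top -bHn 1` sample for the target PID, grep out the reactor thread, strip leading whitespace, read the %CPU column ($9), truncate the fraction, and compare against the busy/idle thresholds. A reconstruction of that extraction (the sample line is copied from this log; the exact helper in interrupt/common.sh differs slightly):

```shell
# One thread line as printed by `top -bHn 1 -p <pid> -w 256 | grep reactor_0`.
top_reactor='3694288 root 20 0 128.2g 46848 34560 R 0.0 0.0 0:00.25 reactor_0'

# %CPU is the 9th column; drop leading spaces first, then truncate "0.0" -> "0".
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}

idle_threshold=30
if (( cpu_rate > idle_threshold )); then
  echo busy
else
  echo idle
fi
```

With the idle sample above this prints `idle`; with the 99.9% samples seen during the perf run it would print `busy`.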
00:32:01.162 ========================================================
00:32:01.162 Latency(us)
00:32:01.162 Device Information : IOPS MiB/s Average min max
00:32:01.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16113.20 62.94 15894.63 4340.89 31216.30
00:32:01.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15929.20 62.22 16073.77 8430.34 29882.94
00:32:01.162 ========================================================
00:32:01.162 Total : 32042.40 125.17 15983.68 4340.89 31216.30
00:32:01.162
00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3694288 0 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3694288 0 idle 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:01.162 17:50:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p
3694288 -w 256 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694288 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0' 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694288 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:01.162 17:50:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3694288 1 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3694288 1 idle 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:01.163 17:50:03 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3694288 -w 256 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694292 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694292 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:01.163 17:50:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:01.733 17:50:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
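The waitforserial helper invoked above polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the connected namespace shows up, giving up after 16 attempts (the `(( i++ <= 15 ))` loop in the trace). A generic reconstruction of that retry pattern — the probe command and short sleep here are placeholders, not the exact helper:

```shell
# Retry "$@" up to 16 times, mirroring the `(( i++ <= 15 ))` loop in
# common/autotest_common.sh; returns 0 on first success, 1 on timeout.
wait_for() {
  local i=0
  while (( i++ <= 15 )); do
    if "$@"; then
      return 0
    fi
    sleep 0.01   # the real helper sleeps 2s between lsblk polls
  done
  return 1
}

# Example probe (placeholder): succeed once a marker file exists.
marker=$(mktemp)
wait_for test -e "$marker" && echo connected
rm -f "$marker"
```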
00:32:01.733 17:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:01.733 17:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:01.733 17:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:01.733 17:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3694288 0 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3694288 0 idle 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3694288 -w 256 00:32:03.640 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694288 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.48 reactor_0' 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694288 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.48 reactor_0 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3694288 1 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3694288 1 idle 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3694288 00:32:03.900 
17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3694288 -w 256 00:32:03.900 17:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3694292 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.09 reactor_1' 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3694292 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.09 reactor_1 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:03.900 17:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:04.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.160 rmmod nvme_tcp 00:32:04.160 rmmod nvme_fabrics 00:32:04.160 rmmod nvme_keyring 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.160 17:50:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3694288 ']' 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3694288 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3694288 ']' 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3694288 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.160 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3694288 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3694288' 00:32:04.419 killing process with pid 3694288 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3694288 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3694288 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.419 17:50:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.956 17:50:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.956 00:32:06.956 real 0m23.412s 00:32:06.956 user 0m39.660s 00:32:06.956 sys 0m8.551s 00:32:06.956 17:50:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.956 17:50:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.956 ************************************ 00:32:06.956 END TEST nvmf_interrupt 00:32:06.956 ************************************ 00:32:06.956 00:32:06.956 real 27m27.401s 00:32:06.956 user 56m36.171s 00:32:06.956 sys 9m23.213s 00:32:06.956 17:50:08 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.956 17:50:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:06.956 ************************************ 00:32:06.956 END TEST nvmf_tcp 00:32:06.956 ************************************ 00:32:06.956 17:50:08 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:06.956 17:50:08 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:06.956 17:50:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:06.956 17:50:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.956 17:50:08 -- common/autotest_common.sh@10 -- # set +x 00:32:06.956 ************************************ 
00:32:06.956 START TEST spdkcli_nvmf_tcp 00:32:06.956 ************************************ 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:06.956 * Looking for test storage... 00:32:06.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.956 --rc genhtml_branch_coverage=1 00:32:06.956 --rc genhtml_function_coverage=1 00:32:06.956 --rc genhtml_legend=1 00:32:06.956 --rc geninfo_all_blocks=1 00:32:06.956 --rc geninfo_unexecuted_blocks=1 00:32:06.956 00:32:06.956 ' 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.956 --rc genhtml_branch_coverage=1 00:32:06.956 --rc genhtml_function_coverage=1 00:32:06.956 --rc genhtml_legend=1 00:32:06.956 --rc geninfo_all_blocks=1 
00:32:06.956 --rc geninfo_unexecuted_blocks=1 00:32:06.956 00:32:06.956 ' 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.956 --rc genhtml_branch_coverage=1 00:32:06.956 --rc genhtml_function_coverage=1 00:32:06.956 --rc genhtml_legend=1 00:32:06.956 --rc geninfo_all_blocks=1 00:32:06.956 --rc geninfo_unexecuted_blocks=1 00:32:06.956 00:32:06.956 ' 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.956 --rc genhtml_branch_coverage=1 00:32:06.956 --rc genhtml_function_coverage=1 00:32:06.956 --rc genhtml_legend=1 00:32:06.956 --rc geninfo_all_blocks=1 00:32:06.956 --rc geninfo_unexecuted_blocks=1 00:32:06.956 00:32:06.956 ' 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.956 17:50:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:06.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3697242 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3697242 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3697242 ']' 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.956 
17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:06.956 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:06.956 [2024-11-19 17:50:09.070531] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:32:06.956 [2024-11-19 17:50:09.070579] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3697242 ] 00:32:06.956 [2024-11-19 17:50:09.146678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:07.214 [2024-11-19 17:50:09.189502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.214 [2024-11-19 17:50:09.189502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:07.214 17:50:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:07.214 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:07.214 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:07.214 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:07.214 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:07.214 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:07.214 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:07.214 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:07.214 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:07.214 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:07.214 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:07.215 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:07.215 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:07.215 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:07.215 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:07.215 ' 00:32:10.500 [2024-11-19 17:50:12.026543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.434 [2024-11-19 17:50:13.366974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:13.966 [2024-11-19 17:50:15.854586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:32:15.879 [2024-11-19 17:50:18.017361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:17.780 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:17.780 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:17.780 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:17.780 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:17.780 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:17.780 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:17.780 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:17.780 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:17.780 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:17.780 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:17.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:17.780 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:17.780 17:50:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:18.038 17:50:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.297 17:50:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:18.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:18.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:18.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:18.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:18.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:18.297 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:18.297 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:18.297 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:18.297 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:18.297 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:18.297 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:18.297 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:18.297 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:18.297 ' 00:32:24.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:24.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:24.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:24.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:24.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:24.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:24.863 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:24.863 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:24.863 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:24.863 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:24.863 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:24.863 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:24.863 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:24.863 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3697242 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3697242 ']' 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3697242 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.863 17:50:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3697242 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3697242' 00:32:24.863 killing process with pid 3697242 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3697242 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3697242 00:32:24.863 17:50:26 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3697242 ']' 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3697242 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3697242 ']' 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3697242 00:32:24.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3697242) - No such process 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3697242 is not found' 00:32:24.863 Process with pid 3697242 is not found 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:24.863 00:32:24.863 real 0m17.357s 00:32:24.863 user 0m38.279s 00:32:24.863 sys 0m0.808s 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.863 17:50:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.863 ************************************ 00:32:24.863 END TEST spdkcli_nvmf_tcp 00:32:24.863 ************************************ 00:32:24.863 17:50:26 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:24.864 17:50:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:24.864 17:50:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:32:24.864 17:50:26 -- common/autotest_common.sh@10 -- # set +x 00:32:24.864 ************************************ 00:32:24.864 START TEST nvmf_identify_passthru 00:32:24.864 ************************************ 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:24.864 * Looking for test storage... 00:32:24.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:24.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.864 --rc genhtml_branch_coverage=1 00:32:24.864 --rc genhtml_function_coverage=1 00:32:24.864 --rc genhtml_legend=1 00:32:24.864 --rc geninfo_all_blocks=1 00:32:24.864 --rc geninfo_unexecuted_blocks=1 00:32:24.864 
00:32:24.864 ' 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:24.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.864 --rc genhtml_branch_coverage=1 00:32:24.864 --rc genhtml_function_coverage=1 00:32:24.864 --rc genhtml_legend=1 00:32:24.864 --rc geninfo_all_blocks=1 00:32:24.864 --rc geninfo_unexecuted_blocks=1 00:32:24.864 00:32:24.864 ' 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:24.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.864 --rc genhtml_branch_coverage=1 00:32:24.864 --rc genhtml_function_coverage=1 00:32:24.864 --rc genhtml_legend=1 00:32:24.864 --rc geninfo_all_blocks=1 00:32:24.864 --rc geninfo_unexecuted_blocks=1 00:32:24.864 00:32:24.864 ' 00:32:24.864 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:24.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.864 --rc genhtml_branch_coverage=1 00:32:24.864 --rc genhtml_function_coverage=1 00:32:24.864 --rc genhtml_legend=1 00:32:24.864 --rc geninfo_all_blocks=1 00:32:24.864 --rc geninfo_unexecuted_blocks=1 00:32:24.864 00:32:24.864 ' 00:32:24.864 17:50:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.864 17:50:26 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.864 17:50:26 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.864 17:50:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.864 17:50:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.864 17:50:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:24.864 17:50:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:24.864 17:50:26 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:24.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:24.864 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:24.864 17:50:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.864 17:50:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.865 17:50:26 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.865 17:50:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.865 17:50:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.865 17:50:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:24.865 17:50:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.865 17:50:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.865 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:24.865 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:24.865 17:50:26 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:24.865 17:50:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.140 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.141 
17:50:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:30.141 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:30.141 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:30.141 Found net devices under 0000:86:00.0: cvl_0_0 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.141 17:50:32 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:30.141 Found net devices under 0000:86:00.1: cvl_0_1 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.141 
17:50:32 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:30.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:32:30.141 00:32:30.141 --- 10.0.0.2 ping statistics --- 00:32:30.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.141 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:32:30.141 00:32:30.141 --- 10.0.0.1 ping statistics --- 00:32:30.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.141 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.141 17:50:32 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:30.141 17:50:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:30.141 17:50:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:30.141 
17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:30.141 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:30.400 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:30.400 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:30.400 17:50:32 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:30.400 17:50:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:30.400 17:50:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:30.400 17:50:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:30.400 17:50:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:30.400 17:50:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:34.591 17:50:36 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:34.591 17:50:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:34.591 17:50:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:34.591 17:50:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:38.783 17:50:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:38.783 17:50:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:38.783 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:38.783 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:38.783 17:50:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:38.783 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.783 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:38.783 17:50:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3704496 00:32:38.783 17:50:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:38.783 17:50:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:38.784 17:50:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3704496 00:32:38.784 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3704496 ']' 00:32:38.784 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:38.784 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.784 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.784 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.784 17:50:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:38.784 [2024-11-19 17:50:40.898002] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:32:38.784 [2024-11-19 17:50:40.898049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.784 [2024-11-19 17:50:40.977467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:39.043 [2024-11-19 17:50:41.021140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.043 [2024-11-19 17:50:41.021181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.043 [2024-11-19 17:50:41.021189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.043 [2024-11-19 17:50:41.021195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.043 [2024-11-19 17:50:41.021200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:39.043 [2024-11-19 17:50:41.022660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.043 [2024-11-19 17:50:41.022768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.043 [2024-11-19 17:50:41.022794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.043 [2024-11-19 17:50:41.022795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:39.043 17:50:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:39.043 INFO: Log level set to 20 00:32:39.043 INFO: Requests: 00:32:39.043 { 00:32:39.043 "jsonrpc": "2.0", 00:32:39.043 "method": "nvmf_set_config", 00:32:39.043 "id": 1, 00:32:39.043 "params": { 00:32:39.043 "admin_cmd_passthru": { 00:32:39.043 "identify_ctrlr": true 00:32:39.043 } 00:32:39.043 } 00:32:39.043 } 00:32:39.043 00:32:39.043 INFO: response: 00:32:39.043 { 00:32:39.043 "jsonrpc": "2.0", 00:32:39.043 "id": 1, 00:32:39.043 "result": true 00:32:39.043 } 00:32:39.043 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.043 17:50:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:39.043 INFO: Setting log level to 20 00:32:39.043 INFO: Setting log level to 20 00:32:39.043 INFO: Log level set to 20 00:32:39.043 INFO: Log level set to 20 00:32:39.043 
INFO: Requests: 00:32:39.043 { 00:32:39.043 "jsonrpc": "2.0", 00:32:39.043 "method": "framework_start_init", 00:32:39.043 "id": 1 00:32:39.043 } 00:32:39.043 00:32:39.043 INFO: Requests: 00:32:39.043 { 00:32:39.043 "jsonrpc": "2.0", 00:32:39.043 "method": "framework_start_init", 00:32:39.043 "id": 1 00:32:39.043 } 00:32:39.043 00:32:39.043 [2024-11-19 17:50:41.134980] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:39.043 INFO: response: 00:32:39.043 { 00:32:39.043 "jsonrpc": "2.0", 00:32:39.043 "id": 1, 00:32:39.043 "result": true 00:32:39.043 } 00:32:39.043 00:32:39.043 INFO: response: 00:32:39.043 { 00:32:39.043 "jsonrpc": "2.0", 00:32:39.043 "id": 1, 00:32:39.043 "result": true 00:32:39.043 } 00:32:39.043 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.043 17:50:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:39.043 INFO: Setting log level to 40 00:32:39.043 INFO: Setting log level to 40 00:32:39.043 INFO: Setting log level to 40 00:32:39.043 [2024-11-19 17:50:41.148327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.043 17:50:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.043 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:39.043 17:50:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:39.044 17:50:41 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.044 17:50:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.335 Nvme0n1 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.335 [2024-11-19 17:50:44.066592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.335 17:50:44 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.335 [ 00:32:42.335 { 00:32:42.335 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:42.335 "subtype": "Discovery", 00:32:42.335 "listen_addresses": [], 00:32:42.335 "allow_any_host": true, 00:32:42.335 "hosts": [] 00:32:42.335 }, 00:32:42.335 { 00:32:42.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.335 "subtype": "NVMe", 00:32:42.335 "listen_addresses": [ 00:32:42.335 { 00:32:42.335 "trtype": "TCP", 00:32:42.335 "adrfam": "IPv4", 00:32:42.335 "traddr": "10.0.0.2", 00:32:42.335 "trsvcid": "4420" 00:32:42.335 } 00:32:42.335 ], 00:32:42.335 "allow_any_host": true, 00:32:42.335 "hosts": [], 00:32:42.335 "serial_number": "SPDK00000000000001", 00:32:42.335 "model_number": "SPDK bdev Controller", 00:32:42.335 "max_namespaces": 1, 00:32:42.335 "min_cntlid": 1, 00:32:42.335 "max_cntlid": 65519, 00:32:42.335 "namespaces": [ 00:32:42.335 { 00:32:42.335 "nsid": 1, 00:32:42.335 "bdev_name": "Nvme0n1", 00:32:42.335 "name": "Nvme0n1", 00:32:42.335 "nguid": "14840BC880CF4FD8903459E4E7DA614F", 00:32:42.335 "uuid": "14840bc8-80cf-4fd8-9034-59e4e7da614f" 00:32:42.335 } 00:32:42.335 ] 00:32:42.335 } 00:32:42.335 ] 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:42.335 17:50:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.335 rmmod nvme_tcp 00:32:42.335 rmmod nvme_fabrics 00:32:42.335 rmmod nvme_keyring 00:32:42.335 17:50:44 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3704496 ']' 00:32:42.335 17:50:44 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3704496 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3704496 ']' 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3704496 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.335 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704496 00:32:42.595 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.595 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.595 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704496' 00:32:42.595 killing process with pid 3704496 00:32:42.595 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3704496 00:32:42.595 17:50:44 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3704496 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:43.973 17:50:46 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.973 17:50:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.973 17:50:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:43.973 17:50:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.880 17:50:48 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.880 00:32:45.880 real 0m21.848s 00:32:45.880 user 0m26.712s 00:32:45.880 sys 0m6.143s 00:32:45.880 17:50:48 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.880 17:50:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:45.880 ************************************ 00:32:45.880 END TEST nvmf_identify_passthru 00:32:45.880 ************************************ 00:32:46.139 17:50:48 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:46.139 17:50:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:46.139 17:50:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.139 17:50:48 -- common/autotest_common.sh@10 -- # set +x 00:32:46.139 ************************************ 00:32:46.139 START TEST nvmf_dif 00:32:46.139 ************************************ 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:46.139 * Looking for test storage... 
00:32:46.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.139 17:50:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.139 --rc genhtml_branch_coverage=1 00:32:46.139 --rc genhtml_function_coverage=1 00:32:46.139 --rc genhtml_legend=1 00:32:46.139 --rc geninfo_all_blocks=1 00:32:46.139 --rc geninfo_unexecuted_blocks=1 00:32:46.139 00:32:46.139 ' 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.139 --rc genhtml_branch_coverage=1 00:32:46.139 --rc genhtml_function_coverage=1 00:32:46.139 --rc genhtml_legend=1 00:32:46.139 --rc geninfo_all_blocks=1 00:32:46.139 --rc geninfo_unexecuted_blocks=1 00:32:46.139 00:32:46.139 ' 00:32:46.139 17:50:48 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:32:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.139 --rc genhtml_branch_coverage=1 00:32:46.139 --rc genhtml_function_coverage=1 00:32:46.139 --rc genhtml_legend=1 00:32:46.140 --rc geninfo_all_blocks=1 00:32:46.140 --rc geninfo_unexecuted_blocks=1 00:32:46.140 00:32:46.140 ' 00:32:46.140 17:50:48 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:46.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.140 --rc genhtml_branch_coverage=1 00:32:46.140 --rc genhtml_function_coverage=1 00:32:46.140 --rc genhtml_legend=1 00:32:46.140 --rc geninfo_all_blocks=1 00:32:46.140 --rc geninfo_unexecuted_blocks=1 00:32:46.140 00:32:46.140 ' 00:32:46.140 17:50:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:46.140 17:50:48 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.140 17:50:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.400 17:50:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.400 17:50:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.400 17:50:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.400 17:50:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.400 17:50:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.400 17:50:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.400 17:50:48 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.400 17:50:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:46.400 17:50:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:46.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.400 17:50:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:46.400 17:50:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:32:46.400 17:50:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:46.400 17:50:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:46.400 17:50:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.400 17:50:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:46.400 17:50:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:46.400 17:50:48 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.400 17:50:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:52.972 17:50:53 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.972 17:50:53 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.972 17:50:53 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.972 17:50:53 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.972 17:50:53 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:52.973 17:50:53 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:52.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:52.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.973 17:50:53 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:52.973 Found net devices under 0000:86:00.0: cvl_0_0 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:52.973 Found net devices under 0000:86:00.1: cvl_0_1 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.973 
17:50:53 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.973 17:50:53 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:52.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:32:52.973 00:32:52.973 --- 10.0.0.2 ping statistics --- 00:32:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.973 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:52.973 00:32:52.973 --- 10.0.0.1 ping statistics --- 00:32:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.973 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:52.973 17:50:54 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:54.879 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:54.879 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:32:54.879 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:54.879 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:54.879 17:50:57 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:54.879 17:50:57 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:54.879 17:50:57 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:54.879 17:50:57 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:54.879 17:50:57 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:54.879 17:50:57 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:55.138 17:50:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:55.138 17:50:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:55.138 17:50:57 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:55.138 17:50:57 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3709987 00:32:55.138 17:50:57 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:55.138 17:50:57 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3709987 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3709987 ']' 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:55.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.138 17:50:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:55.138 [2024-11-19 17:50:57.159650] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:32:55.138 [2024-11-19 17:50:57.159698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.138 [2024-11-19 17:50:57.238473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.138 [2024-11-19 17:50:57.280206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.138 [2024-11-19 17:50:57.280242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.138 [2024-11-19 17:50:57.280248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.138 [2024-11-19 17:50:57.280254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.138 [2024-11-19 17:50:57.280259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:55.138 [2024-11-19 17:50:57.280834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:55.397 17:50:57 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:55.397 17:50:57 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.397 17:50:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:55.397 17:50:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:55.397 [2024-11-19 17:50:57.417129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.397 17:50:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:55.397 17:50:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:55.397 ************************************ 00:32:55.397 START TEST fio_dif_1_default 00:32:55.397 ************************************ 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:55.397 bdev_null0 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:55.397 [2024-11-19 17:50:57.489448] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:55.397 { 00:32:55.397 "params": { 00:32:55.397 "name": "Nvme$subsystem", 00:32:55.397 "trtype": "$TEST_TRANSPORT", 00:32:55.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:55.397 "adrfam": "ipv4", 00:32:55.397 "trsvcid": "$NVMF_PORT", 00:32:55.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:55.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:55.397 "hdgst": ${hdgst:-false}, 00:32:55.397 "ddgst": ${ddgst:-false} 00:32:55.397 }, 00:32:55.397 "method": "bdev_nvme_attach_controller" 00:32:55.397 } 00:32:55.397 EOF 00:32:55.397 )") 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:55.397 "params": { 00:32:55.397 "name": "Nvme0", 00:32:55.397 "trtype": "tcp", 00:32:55.397 "traddr": "10.0.0.2", 00:32:55.397 "adrfam": "ipv4", 00:32:55.397 "trsvcid": "4420", 00:32:55.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:55.397 "hdgst": false, 00:32:55.397 "ddgst": false 00:32:55.397 }, 00:32:55.397 "method": "bdev_nvme_attach_controller" 00:32:55.397 }' 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:55.397 17:50:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:55.656 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:55.656 fio-3.35 
00:32:55.656 Starting 1 thread 00:33:07.867 00:33:07.867 filename0: (groupid=0, jobs=1): err= 0: pid=3710339: Tue Nov 19 17:51:08 2024 00:33:07.867 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:33:07.867 slat (nsec): min=5856, max=26024, avg=6206.93, stdev=798.62 00:33:07.867 clat (usec): min=40821, max=43803, avg=41018.83, stdev=242.18 00:33:07.867 lat (usec): min=40827, max=43829, avg=41025.04, stdev=242.54 00:33:07.867 clat percentiles (usec): 00:33:07.867 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:07.867 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:07.867 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:07.867 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:33:07.867 | 99.99th=[43779] 00:33:07.867 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:33:07.867 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:07.867 lat (msec) : 50=100.00% 00:33:07.867 cpu : usr=91.97%, sys=7.76%, ctx=13, majf=0, minf=0 00:33:07.867 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.867 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.867 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:07.867 00:33:07.867 Run status group 0 (all jobs): 00:33:07.867 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 00:33:07.867 real 0m11.234s 00:33:07.867 user 0m16.480s 00:33:07.867 sys 0m1.075s 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 ************************************ 00:33:07.867 END TEST fio_dif_1_default 00:33:07.867 ************************************ 00:33:07.867 17:51:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:07.867 17:51:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:07.867 17:51:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 ************************************ 00:33:07.867 START TEST fio_dif_1_multi_subsystems 00:33:07.867 ************************************ 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 bdev_null0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 [2024-11-19 17:51:08.799163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 bdev_null1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 17:51:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.867 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.868 17:51:08 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:07.868 { 00:33:07.868 "params": { 00:33:07.868 "name": "Nvme$subsystem", 00:33:07.868 "trtype": "$TEST_TRANSPORT", 00:33:07.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.868 "adrfam": "ipv4", 00:33:07.868 "trsvcid": "$NVMF_PORT", 00:33:07.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.868 "hdgst": ${hdgst:-false}, 00:33:07.868 "ddgst": ${ddgst:-false} 00:33:07.868 }, 00:33:07.868 "method": "bdev_nvme_attach_controller" 00:33:07.868 } 00:33:07.868 EOF 00:33:07.868 )") 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:07.868 { 00:33:07.868 "params": { 00:33:07.868 "name": "Nvme$subsystem", 00:33:07.868 "trtype": "$TEST_TRANSPORT", 00:33:07.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.868 "adrfam": "ipv4", 00:33:07.868 "trsvcid": "$NVMF_PORT", 00:33:07.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.868 "hdgst": ${hdgst:-false}, 00:33:07.868 "ddgst": ${ddgst:-false} 00:33:07.868 }, 00:33:07.868 "method": "bdev_nvme_attach_controller" 00:33:07.868 } 00:33:07.868 EOF 00:33:07.868 )") 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:07.868 "params": { 00:33:07.868 "name": "Nvme0", 00:33:07.868 "trtype": "tcp", 00:33:07.868 "traddr": "10.0.0.2", 00:33:07.868 "adrfam": "ipv4", 00:33:07.868 "trsvcid": "4420", 00:33:07.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:07.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:07.868 "hdgst": false, 00:33:07.868 "ddgst": false 00:33:07.868 }, 00:33:07.868 "method": "bdev_nvme_attach_controller" 00:33:07.868 },{ 00:33:07.868 "params": { 00:33:07.868 "name": "Nvme1", 00:33:07.868 "trtype": "tcp", 00:33:07.868 "traddr": "10.0.0.2", 00:33:07.868 "adrfam": "ipv4", 00:33:07.868 "trsvcid": "4420", 00:33:07.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:07.868 "hdgst": false, 00:33:07.868 "ddgst": false 00:33:07.868 }, 00:33:07.868 "method": "bdev_nvme_attach_controller" 00:33:07.868 }' 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:07.868 17:51:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.868 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:07.868 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:07.868 fio-3.35 00:33:07.868 Starting 2 threads 00:33:20.081 00:33:20.081 filename0: (groupid=0, jobs=1): err= 0: pid=3712826: Tue Nov 19 17:51:20 2024 00:33:20.081 read: IOPS=195, BW=783KiB/s (802kB/s)(7856KiB/10031msec) 00:33:20.081 slat (nsec): min=5942, max=31461, avg=7438.71, stdev=2472.01 00:33:20.081 clat (usec): min=390, max=42579, avg=20407.26, stdev=20447.91 00:33:20.081 lat (usec): min=396, max=42585, avg=20414.69, stdev=20447.18 00:33:20.081 clat percentiles (usec): 00:33:20.081 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 429], 00:33:20.081 | 30.00th=[ 449], 40.00th=[ 603], 50.00th=[ 914], 60.00th=[41157], 00:33:20.081 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:20.081 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:20.081 | 99.99th=[42730] 00:33:20.081 bw ( KiB/s): min= 704, max= 832, per=50.36%, avg=784.00, stdev=35.21, samples=20 00:33:20.081 iops : min= 176, max= 208, avg=196.00, stdev= 8.80, samples=20 00:33:20.081 lat (usec) : 500=34.01%, 750=15.68%, 1000=0.87% 00:33:20.081 lat (msec) : 2=0.76%, 50=48.68% 00:33:20.081 cpu : usr=96.96%, sys=2.79%, ctx=13, majf=0, minf=57 00:33:20.081 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:20.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:20.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.081 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.081 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:20.081 filename1: (groupid=0, jobs=1): err= 0: pid=3712827: Tue Nov 19 17:51:20 2024 00:33:20.081 read: IOPS=193, BW=775KiB/s (794kB/s)(7760KiB/10012msec) 00:33:20.081 slat (nsec): min=6037, max=27143, avg=7289.73, stdev=2161.40 00:33:20.081 clat (usec): min=392, max=42593, avg=20621.42, stdev=20422.95 00:33:20.081 lat (usec): min=399, max=42600, avg=20628.71, stdev=20422.27 00:33:20.081 clat percentiles (usec): 00:33:20.081 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 429], 00:33:20.081 | 30.00th=[ 441], 40.00th=[ 603], 50.00th=[ 979], 60.00th=[40633], 00:33:20.081 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:20.081 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:20.081 | 99.99th=[42730] 00:33:20.081 bw ( KiB/s): min= 704, max= 832, per=49.72%, avg=774.40, stdev=28.62, samples=20 00:33:20.081 iops : min= 176, max= 208, avg=193.60, stdev= 7.16, samples=20 00:33:20.081 lat (usec) : 500=33.92%, 750=14.74%, 1000=1.49% 00:33:20.081 lat (msec) : 2=0.57%, 50=49.28% 00:33:20.081 cpu : usr=96.95%, sys=2.80%, ctx=14, majf=0, minf=101 00:33:20.081 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:20.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.081 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.081 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:20.081 00:33:20.081 Run status group 0 (all jobs): 00:33:20.081 READ: bw=1557KiB/s (1594kB/s), 775KiB/s-783KiB/s (794kB/s-802kB/s), io=15.2MiB (16.0MB), run=10012-10031msec 00:33:20.081 17:51:20 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 00:33:20.081 real 0m11.604s 00:33:20.081 user 0m26.564s 00:33:20.081 sys 0m0.876s 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 ************************************ 00:33:20.081 END TEST fio_dif_1_multi_subsystems 00:33:20.081 ************************************ 00:33:20.081 17:51:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:20.081 17:51:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:20.081 17:51:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 ************************************ 00:33:20.081 START TEST fio_dif_rand_params 00:33:20.081 ************************************ 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:20.081 17:51:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 bdev_null0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 
17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:20.081 [2024-11-19 17:51:20.477709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:20.081 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:20.081 { 00:33:20.081 "params": { 00:33:20.081 "name": 
"Nvme$subsystem", 00:33:20.081 "trtype": "$TEST_TRANSPORT", 00:33:20.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.081 "adrfam": "ipv4", 00:33:20.081 "trsvcid": "$NVMF_PORT", 00:33:20.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.082 "hdgst": ${hdgst:-false}, 00:33:20.082 "ddgst": ${ddgst:-false} 00:33:20.082 }, 00:33:20.082 "method": "bdev_nvme_attach_controller" 00:33:20.082 } 00:33:20.082 EOF 00:33:20.082 )") 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.082 17:51:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:20.082 "params": { 00:33:20.082 "name": "Nvme0", 00:33:20.082 "trtype": "tcp", 00:33:20.082 "traddr": "10.0.0.2", 00:33:20.082 "adrfam": "ipv4", 00:33:20.082 "trsvcid": "4420", 00:33:20.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.082 "hdgst": false, 00:33:20.082 "ddgst": false 00:33:20.082 }, 00:33:20.082 "method": "bdev_nvme_attach_controller" 00:33:20.082 }' 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:20.082 17:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:20.082 17:51:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.082 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:20.082 ... 00:33:20.082 fio-3.35 00:33:20.082 Starting 3 threads 00:33:24.403 00:33:24.403 filename0: (groupid=0, jobs=1): err= 0: pid=3714788: Tue Nov 19 17:51:26 2024 00:33:24.403 read: IOPS=335, BW=41.9MiB/s (44.0MB/s)(210MiB/5007msec) 00:33:24.403 slat (nsec): min=6250, max=29016, avg=10595.82, stdev=2025.82 00:33:24.403 clat (usec): min=3289, max=50118, avg=8927.86, stdev=5168.50 00:33:24.403 lat (usec): min=3296, max=50130, avg=8938.46, stdev=5168.50 00:33:24.403 clat percentiles (usec): 00:33:24.403 | 1.00th=[ 4359], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 7177], 00:33:24.403 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:24.403 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10683], 00:33:24.403 | 99.00th=[47449], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:33:24.403 | 99.99th=[50070] 00:33:24.403 bw ( KiB/s): min=32256, max=47360, per=35.93%, avg=42931.20, stdev=4584.31, samples=10 00:33:24.403 iops : min= 252, max= 370, avg=335.40, stdev=35.81, samples=10 00:33:24.403 lat (msec) : 4=0.77%, 10=88.45%, 20=9.17%, 50=1.49%, 100=0.12% 00:33:24.403 cpu : usr=93.99%, sys=5.69%, ctx=8, majf=0, minf=9 00:33:24.403 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.403 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.403 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:24.403 filename0: (groupid=0, jobs=1): err= 0: pid=3714789: Tue Nov 19 17:51:26 2024 00:33:24.403 read: IOPS=325, BW=40.7MiB/s 
(42.7MB/s)(205MiB/5045msec) 00:33:24.403 slat (nsec): min=6214, max=28571, avg=10646.41, stdev=2008.59 00:33:24.403 clat (usec): min=3341, max=52319, avg=9178.70, stdev=4475.82 00:33:24.403 lat (usec): min=3348, max=52339, avg=9189.35, stdev=4476.00 00:33:24.403 clat percentiles (usec): 00:33:24.403 | 1.00th=[ 3851], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6980], 00:33:24.403 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:33:24.403 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11076], 95.00th=[11731], 00:33:24.403 | 99.00th=[45876], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:33:24.403 | 99.99th=[52167] 00:33:24.403 bw ( KiB/s): min=35584, max=47104, per=35.11%, avg=41958.40, stdev=3377.84, samples=10 00:33:24.403 iops : min= 278, max= 368, avg=327.80, stdev=26.39, samples=10 00:33:24.403 lat (msec) : 4=1.64%, 10=69.91%, 20=27.41%, 50=0.73%, 100=0.30% 00:33:24.403 cpu : usr=94.11%, sys=5.59%, ctx=6, majf=0, minf=9 00:33:24.403 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.403 issued rwts: total=1642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.403 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:24.403 filename0: (groupid=0, jobs=1): err= 0: pid=3714790: Tue Nov 19 17:51:26 2024 00:33:24.403 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(174MiB/5003msec) 00:33:24.403 slat (nsec): min=6192, max=24660, avg=10685.82, stdev=2058.94 00:33:24.403 clat (usec): min=3682, max=53222, avg=10799.56, stdev=7209.66 00:33:24.403 lat (usec): min=3689, max=53235, avg=10810.24, stdev=7209.63 00:33:24.403 clat percentiles (usec): 00:33:24.403 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 8291], 00:33:24.403 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:33:24.403 | 70.00th=[10552], 
80.00th=[11076], 90.00th=[11863], 95.00th=[12649], 00:33:24.403 | 99.00th=[49546], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:33:24.403 | 99.99th=[53216] 00:33:24.403 bw ( KiB/s): min=22528, max=40704, per=29.69%, avg=35481.60, stdev=5118.86, samples=10 00:33:24.403 iops : min= 176, max= 318, avg=277.20, stdev=39.99, samples=10 00:33:24.403 lat (msec) : 4=0.43%, 10=56.70%, 20=39.63%, 50=2.59%, 100=0.65% 00:33:24.403 cpu : usr=93.88%, sys=5.74%, ctx=8, majf=0, minf=10 00:33:24.403 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.404 issued rwts: total=1388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.404 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:24.404 00:33:24.404 Run status group 0 (all jobs): 00:33:24.404 READ: bw=117MiB/s (122MB/s), 34.7MiB/s-41.9MiB/s (36.4MB/s-44.0MB/s), io=589MiB (617MB), run=5003-5045msec 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.404 17:51:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.404 bdev_null0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.404 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 [2024-11-19 17:51:26.629184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:24.664 bdev_null1 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 bdev_null2 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:24.664 17:51:26 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.664 { 00:33:24.664 "params": { 00:33:24.664 "name": "Nvme$subsystem", 00:33:24.664 "trtype": "$TEST_TRANSPORT", 00:33:24.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.664 "adrfam": "ipv4", 00:33:24.664 "trsvcid": "$NVMF_PORT", 00:33:24.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.664 "hdgst": ${hdgst:-false}, 00:33:24.664 "ddgst": ${ddgst:-false} 00:33:24.664 }, 00:33:24.664 "method": "bdev_nvme_attach_controller" 00:33:24.664 } 00:33:24.664 EOF 00:33:24.664 )") 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.664 { 00:33:24.664 "params": { 00:33:24.664 "name": "Nvme$subsystem", 00:33:24.664 "trtype": "$TEST_TRANSPORT", 00:33:24.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.664 "adrfam": "ipv4", 00:33:24.664 "trsvcid": "$NVMF_PORT", 00:33:24.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.664 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:24.664 "hdgst": ${hdgst:-false}, 00:33:24.664 "ddgst": ${ddgst:-false} 00:33:24.664 }, 00:33:24.664 "method": "bdev_nvme_attach_controller" 00:33:24.664 } 00:33:24.664 EOF 00:33:24.664 )") 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.664 { 00:33:24.664 "params": { 00:33:24.664 "name": "Nvme$subsystem", 00:33:24.664 "trtype": "$TEST_TRANSPORT", 00:33:24.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.664 "adrfam": "ipv4", 00:33:24.664 "trsvcid": "$NVMF_PORT", 00:33:24.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.664 "hdgst": ${hdgst:-false}, 00:33:24.664 "ddgst": ${ddgst:-false} 00:33:24.664 }, 00:33:24.664 "method": "bdev_nvme_attach_controller" 00:33:24.664 } 00:33:24.664 EOF 00:33:24.664 )") 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:24.664 17:51:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.664 "params": { 00:33:24.664 "name": "Nvme0", 00:33:24.665 "trtype": "tcp", 00:33:24.665 "traddr": "10.0.0.2", 00:33:24.665 "adrfam": "ipv4", 00:33:24.665 "trsvcid": "4420", 00:33:24.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:24.665 "hdgst": false, 00:33:24.665 "ddgst": false 00:33:24.665 }, 00:33:24.665 "method": "bdev_nvme_attach_controller" 00:33:24.665 },{ 00:33:24.665 "params": { 00:33:24.665 "name": "Nvme1", 00:33:24.665 "trtype": "tcp", 00:33:24.665 "traddr": "10.0.0.2", 00:33:24.665 "adrfam": "ipv4", 00:33:24.665 "trsvcid": "4420", 00:33:24.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:24.665 "hdgst": false, 00:33:24.665 "ddgst": false 00:33:24.665 }, 00:33:24.665 "method": "bdev_nvme_attach_controller" 00:33:24.665 },{ 00:33:24.665 "params": { 00:33:24.665 "name": "Nvme2", 00:33:24.665 "trtype": "tcp", 00:33:24.665 "traddr": "10.0.0.2", 00:33:24.665 "adrfam": "ipv4", 00:33:24.665 "trsvcid": "4420", 00:33:24.665 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:24.665 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:24.665 "hdgst": false, 00:33:24.665 "ddgst": false 00:33:24.665 }, 00:33:24.665 "method": "bdev_nvme_attach_controller" 00:33:24.665 }' 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.665 17:51:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:24.665 17:51:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.924 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:24.924 ... 00:33:24.924 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:24.924 ... 00:33:24.924 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:24.924 ... 
00:33:24.924 fio-3.35 00:33:24.924 Starting 24 threads 00:33:37.150 00:33:37.150 filename0: (groupid=0, jobs=1): err= 0: pid=3715983: Tue Nov 19 17:51:37 2024 00:33:37.150 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10010msec) 00:33:37.150 slat (nsec): min=6844, max=66334, avg=16419.44, stdev=5969.32 00:33:37.150 clat (usec): min=9806, max=31266, avg=27821.99, stdev=1530.30 00:33:37.150 lat (usec): min=9832, max=31280, avg=27838.41, stdev=1529.40 00:33:37.150 clat percentiles (usec): 00:33:37.150 | 1.00th=[18220], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:37.150 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.150 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:37.150 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30540], 99.95th=[30540], 00:33:37.150 | 99.99th=[31327] 00:33:37.150 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2284.80, stdev=62.64, samples=20 00:33:37.150 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:37.150 lat (msec) : 10=0.12%, 20=1.00%, 50=98.88% 00:33:37.150 cpu : usr=98.65%, sys=0.97%, ctx=11, majf=0, minf=9 00:33:37.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.150 filename0: (groupid=0, jobs=1): err= 0: pid=3715984: Tue Nov 19 17:51:37 2024 00:33:37.150 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:37.150 slat (usec): min=7, max=123, avg=55.05, stdev=26.90 00:33:37.150 clat (usec): min=10621, max=48458, avg=27538.45, stdev=1577.21 00:33:37.150 lat (usec): min=10628, max=48471, avg=27593.51, stdev=1579.99 00:33:37.150 clat percentiles (usec): 00:33:37.150 | 1.00th=[26870], 
5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.150 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:33:37.150 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.150 | 99.00th=[28705], 99.50th=[29754], 99.90th=[48497], 99.95th=[48497], 00:33:37.150 | 99.99th=[48497] 00:33:37.150 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.53, stdev=57.55, samples=19 00:33:37.150 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:33:37.150 lat (msec) : 20=0.56%, 50=99.44% 00:33:37.150 cpu : usr=98.83%, sys=0.80%, ctx=10, majf=0, minf=9 00:33:37.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.150 filename0: (groupid=0, jobs=1): err= 0: pid=3715985: Tue Nov 19 17:51:37 2024 00:33:37.150 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10003msec) 00:33:37.150 slat (nsec): min=6704, max=64390, avg=12935.34, stdev=6024.19 00:33:37.150 clat (usec): min=9732, max=63745, avg=28015.08, stdev=2108.96 00:33:37.150 lat (usec): min=9754, max=63764, avg=28028.01, stdev=2108.56 00:33:37.150 clat percentiles (usec): 00:33:37.150 | 1.00th=[24249], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:37.150 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.150 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:37.150 | 99.00th=[29754], 99.50th=[43254], 99.90th=[47973], 99.95th=[47973], 00:33:37.150 | 99.99th=[63701] 00:33:37.150 bw ( KiB/s): min= 2100, max= 2304, per=4.16%, avg=2266.32, stdev=62.10, samples=19 00:33:37.150 iops : min= 525, max= 576, avg=566.58, stdev=15.53, samples=19 00:33:37.150 lat (msec) 
: 10=0.14%, 20=0.70%, 50=99.12%, 100=0.04% 00:33:37.150 cpu : usr=98.43%, sys=1.20%, ctx=15, majf=0, minf=9 00:33:37.150 IO depths : 1=1.1%, 2=7.0%, 4=23.7%, 8=56.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:33:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 complete : 0=0.0%, 4=94.1%, 8=0.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 issued rwts: total=5692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.150 filename0: (groupid=0, jobs=1): err= 0: pid=3715986: Tue Nov 19 17:51:37 2024 00:33:37.150 read: IOPS=568, BW=2275KiB/s (2329kB/s)(22.2MiB/10002msec) 00:33:37.150 slat (usec): min=6, max=120, avg=53.40, stdev=26.64 00:33:37.150 clat (usec): min=10623, max=73809, avg=27601.63, stdev=2267.26 00:33:37.150 lat (usec): min=10637, max=73829, avg=27655.02, stdev=2268.95 00:33:37.150 clat percentiles (usec): 00:33:37.150 | 1.00th=[23462], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:37.150 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:33:37.150 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.150 | 99.00th=[30540], 99.50th=[40109], 99.90th=[54264], 99.95th=[54264], 00:33:37.150 | 99.99th=[73925] 00:33:37.150 bw ( KiB/s): min= 2144, max= 2304, per=4.16%, avg=2266.95, stdev=60.59, samples=19 00:33:37.150 iops : min= 536, max= 576, avg=566.74, stdev=15.15, samples=19 00:33:37.150 lat (msec) : 20=0.84%, 50=98.87%, 100=0.28% 00:33:37.150 cpu : usr=98.52%, sys=1.12%, ctx=13, majf=0, minf=9 00:33:37.150 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 issued rwts: total=5688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.150 latency : target=0, window=0, percentile=100.00%, depth=16 
00:33:37.150 filename0: (groupid=0, jobs=1): err= 0: pid=3715987: Tue Nov 19 17:51:37 2024 00:33:37.150 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:37.150 slat (usec): min=7, max=120, avg=56.40, stdev=24.76 00:33:37.150 clat (usec): min=10652, max=54871, avg=27555.69, stdev=1612.92 00:33:37.150 lat (usec): min=10682, max=54894, avg=27612.10, stdev=1614.90 00:33:37.150 clat percentiles (usec): 00:33:37.150 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.150 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:37.150 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.150 | 99.00th=[28705], 99.50th=[29754], 99.90th=[48497], 99.95th=[48497], 00:33:37.150 | 99.99th=[54789] 00:33:37.150 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.53, stdev=57.55, samples=19 00:33:37.150 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:33:37.150 lat (msec) : 20=0.56%, 50=99.40%, 100=0.04% 00:33:37.150 cpu : usr=98.56%, sys=1.07%, ctx=8, majf=0, minf=9 00:33:37.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.150 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.150 filename0: (groupid=0, jobs=1): err= 0: pid=3715989: Tue Nov 19 17:51:37 2024 00:33:37.150 read: IOPS=571, BW=2288KiB/s (2342kB/s)(22.4MiB/10016msec) 00:33:37.150 slat (usec): min=6, max=104, avg=41.34, stdev=18.17 00:33:37.150 clat (usec): min=9887, max=34926, avg=27648.27, stdev=1556.25 00:33:37.151 lat (usec): min=9919, max=34976, avg=27689.61, stdev=1556.96 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[18482], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:33:37.151 | 
30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:37.151 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:37.151 | 99.00th=[28705], 99.50th=[30016], 99.90th=[34341], 99.95th=[34866], 00:33:37.151 | 99.99th=[34866] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2284.80, stdev=61.11, samples=20 00:33:37.151 iops : min= 544, max= 608, avg=571.20, stdev=15.28, samples=20 00:33:37.151 lat (msec) : 10=0.07%, 20=1.05%, 50=98.88% 00:33:37.151 cpu : usr=98.63%, sys=1.03%, ctx=16, majf=0, minf=9 00:33:37.151 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename0: (groupid=0, jobs=1): err= 0: pid=3715990: Tue Nov 19 17:51:37 2024 00:33:37.151 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10009msec) 00:33:37.151 slat (nsec): min=7453, max=78355, avg=19165.69, stdev=7160.93 00:33:37.151 clat (usec): min=16073, max=37238, avg=27941.85, stdev=829.72 00:33:37.151 lat (usec): min=16086, max=37259, avg=27961.02, stdev=829.75 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:37.151 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.151 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:37.151 | 99.00th=[28705], 99.50th=[30278], 99.90th=[36963], 99.95th=[36963], 00:33:37.151 | 99.99th=[37487] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.32, stdev=57.91, samples=19 00:33:37.151 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:37.151 lat (msec) : 20=0.28%, 50=99.72% 00:33:37.151 cpu : usr=98.65%, 
sys=0.98%, ctx=12, majf=0, minf=9 00:33:37.151 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename0: (groupid=0, jobs=1): err= 0: pid=3715991: Tue Nov 19 17:51:37 2024 00:33:37.151 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:37.151 slat (usec): min=5, max=117, avg=57.75, stdev=25.93 00:33:37.151 clat (usec): min=15849, max=39759, avg=27535.97, stdev=920.65 00:33:37.151 lat (usec): min=15886, max=39777, avg=27593.73, stdev=924.87 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.151 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:33:37.151 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.151 | 99.00th=[28705], 99.50th=[30016], 99.90th=[39584], 99.95th=[39584], 00:33:37.151 | 99.99th=[39584] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.32, stdev=57.91, samples=19 00:33:37.151 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:37.151 lat (msec) : 20=0.28%, 50=99.72% 00:33:37.151 cpu : usr=98.68%, sys=0.94%, ctx=24, majf=0, minf=9 00:33:37.151 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename1: (groupid=0, jobs=1): err= 0: pid=3715992: Tue Nov 19 17:51:37 2024 
00:33:37.151 read: IOPS=578, BW=2314KiB/s (2369kB/s)(22.6MiB/10023msec) 00:33:37.151 slat (usec): min=6, max=118, avg=24.40, stdev=20.38 00:33:37.151 clat (usec): min=2345, max=30502, avg=27474.72, stdev=3030.13 00:33:37.151 lat (usec): min=2361, max=30515, avg=27499.12, stdev=3030.21 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[ 6456], 5.00th=[27132], 10.00th=[27657], 20.00th=[27657], 00:33:37.151 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.151 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:37.151 | 99.00th=[28705], 99.50th=[28705], 99.90th=[30278], 99.95th=[30540], 00:33:37.151 | 99.99th=[30540] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2992, per=4.24%, avg=2312.80, stdev=168.17, samples=20 00:33:37.151 iops : min= 544, max= 748, avg=578.20, stdev=42.04, samples=20 00:33:37.151 lat (msec) : 4=0.83%, 10=0.31%, 20=1.10%, 50=97.76% 00:33:37.151 cpu : usr=98.49%, sys=1.15%, ctx=13, majf=0, minf=9 00:33:37.151 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename1: (groupid=0, jobs=1): err= 0: pid=3715993: Tue Nov 19 17:51:37 2024 00:33:37.151 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.4MiB/10013msec) 00:33:37.151 slat (usec): min=6, max=124, avg=56.53, stdev=26.58 00:33:37.151 clat (usec): min=9833, max=30648, avg=27491.62, stdev=1522.59 00:33:37.151 lat (usec): min=9849, max=30663, avg=27548.15, stdev=1523.95 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[18482], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.151 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27919], 00:33:37.151 | 
70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:37.151 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30540], 99.95th=[30540], 00:33:37.151 | 99.99th=[30540] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2284.80, stdev=62.64, samples=20 00:33:37.151 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:37.151 lat (msec) : 10=0.10%, 20=1.01%, 50=98.88% 00:33:37.151 cpu : usr=98.39%, sys=1.23%, ctx=15, majf=0, minf=9 00:33:37.151 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename1: (groupid=0, jobs=1): err= 0: pid=3715994: Tue Nov 19 17:51:37 2024 00:33:37.151 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:37.151 slat (usec): min=5, max=120, avg=57.10, stdev=27.80 00:33:37.151 clat (usec): min=9758, max=48381, avg=27634.19, stdev=1779.59 00:33:37.151 lat (usec): min=9766, max=48395, avg=27691.29, stdev=1780.01 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[21627], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:37.151 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:37.151 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:37.151 | 99.00th=[30278], 99.50th=[34341], 99.90th=[48497], 99.95th=[48497], 00:33:37.151 | 99.99th=[48497] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.53, stdev=52.11, samples=19 00:33:37.151 iops : min= 544, max= 576, avg=567.63, stdev=13.03, samples=19 00:33:37.151 lat (msec) : 10=0.12%, 20=0.51%, 50=99.37% 00:33:37.151 cpu : usr=98.71%, sys=0.93%, ctx=15, majf=0, minf=9 00:33:37.151 IO depths : 1=1.3%, 2=7.3%, 4=24.7%, 
8=55.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename1: (groupid=0, jobs=1): err= 0: pid=3715995: Tue Nov 19 17:51:37 2024 00:33:37.151 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:33:37.151 slat (usec): min=7, max=120, avg=58.53, stdev=24.58 00:33:37.151 clat (usec): min=18208, max=37166, avg=27567.11, stdev=750.90 00:33:37.151 lat (usec): min=18230, max=37191, avg=27625.63, stdev=753.99 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.151 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:37.151 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.151 | 99.00th=[28705], 99.50th=[30540], 99.90th=[34866], 99.95th=[34866], 00:33:37.151 | 99.99th=[36963] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:33:37.151 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:33:37.151 lat (msec) : 20=0.12%, 50=99.88% 00:33:37.151 cpu : usr=98.52%, sys=1.11%, ctx=18, majf=0, minf=9 00:33:37.151 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename1: (groupid=0, jobs=1): err= 0: pid=3715996: Tue Nov 19 17:51:37 2024 00:33:37.151 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10009msec) 00:33:37.151 slat 
(nsec): min=6880, max=78373, avg=18859.11, stdev=7213.51 00:33:37.151 clat (usec): min=16102, max=37305, avg=27939.96, stdev=859.31 00:33:37.151 lat (usec): min=16116, max=37329, avg=27958.82, stdev=859.50 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:37.151 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.151 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:37.151 | 99.00th=[28967], 99.50th=[30540], 99.90th=[37487], 99.95th=[37487], 00:33:37.151 | 99.99th=[37487] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.32, stdev=57.91, samples=19 00:33:37.151 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:37.151 lat (msec) : 20=0.28%, 50=99.72% 00:33:37.151 cpu : usr=98.59%, sys=1.04%, ctx=16, majf=0, minf=9 00:33:37.151 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.151 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.151 filename1: (groupid=0, jobs=1): err= 0: pid=3715997: Tue Nov 19 17:51:37 2024 00:33:37.151 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:33:37.151 slat (usec): min=4, max=123, avg=55.85, stdev=26.31 00:33:37.151 clat (usec): min=10632, max=48040, avg=27539.03, stdev=1610.44 00:33:37.151 lat (usec): min=10647, max=48063, avg=27594.88, stdev=1612.99 00:33:37.151 clat percentiles (usec): 00:33:37.151 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.151 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:33:37.151 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.151 | 99.00th=[28705], 
99.50th=[30016], 99.90th=[47973], 99.95th=[47973], 00:33:37.151 | 99.99th=[47973] 00:33:37.151 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.53, stdev=57.55, samples=19 00:33:37.151 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:33:37.151 lat (msec) : 20=0.60%, 50=99.40% 00:33:37.151 cpu : usr=98.55%, sys=1.08%, ctx=20, majf=0, minf=9 00:33:37.151 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename1: (groupid=0, jobs=1): err= 0: pid=3715998: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.4MiB/10013msec) 00:33:37.152 slat (usec): min=4, max=151, avg=60.55, stdev=23.85 00:33:37.152 clat (usec): min=8538, max=30469, avg=27447.24, stdev=1515.12 00:33:37.152 lat (usec): min=8545, max=30520, avg=27507.79, stdev=1519.40 00:33:37.152 clat percentiles (usec): 00:33:37.152 | 1.00th=[18220], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.152 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:37.152 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.152 | 99.00th=[28443], 99.50th=[28967], 99.90th=[30016], 99.95th=[30278], 00:33:37.152 | 99.99th=[30540] 00:33:37.152 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2284.80, stdev=62.64, samples=20 00:33:37.152 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:37.152 lat (msec) : 10=0.16%, 20=0.96%, 50=98.88% 00:33:37.152 cpu : usr=98.52%, sys=1.10%, ctx=10, majf=0, minf=9 00:33:37.152 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename1: (groupid=0, jobs=1): err= 0: pid=3716000: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10002msec) 00:33:37.152 slat (usec): min=6, max=120, avg=56.62, stdev=25.94 00:33:37.152 clat (usec): min=15895, max=57525, avg=27638.48, stdev=1658.62 00:33:37.152 lat (usec): min=15925, max=57541, avg=27695.10, stdev=1657.94 00:33:37.152 clat percentiles (usec): 00:33:37.152 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.152 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:37.152 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.152 | 99.00th=[30540], 99.50th=[31327], 99.90th=[54264], 99.95th=[57410], 00:33:37.152 | 99.99th=[57410] 00:33:37.152 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2263.79, stdev=73.91, samples=19 00:33:37.152 iops : min= 513, max= 576, avg=565.95, stdev=18.48, samples=19 00:33:37.152 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:33:37.152 cpu : usr=98.65%, sys=0.98%, ctx=17, majf=0, minf=9 00:33:37.152 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename2: (groupid=0, jobs=1): err= 0: pid=3716001: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10009msec) 00:33:37.152 slat (nsec): min=8587, max=92999, avg=45753.58, stdev=14788.18 00:33:37.152 clat (usec): 
min=16496, max=34594, avg=27718.19, stdev=762.75 00:33:37.152 lat (usec): min=16529, max=34631, avg=27763.94, stdev=762.25 00:33:37.152 clat percentiles (usec): 00:33:37.152 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:33:37.152 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:37.152 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:37.152 | 99.00th=[30016], 99.50th=[31065], 99.90th=[33424], 99.95th=[34341], 00:33:37.152 | 99.99th=[34341] 00:33:37.152 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:33:37.152 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:33:37.152 lat (msec) : 20=0.28%, 50=99.72% 00:33:37.152 cpu : usr=97.78%, sys=1.46%, ctx=86, majf=0, minf=9 00:33:37.152 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename2: (groupid=0, jobs=1): err= 0: pid=3716002: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.2MiB/10004msec) 00:33:37.152 slat (nsec): min=5995, max=29656, avg=11340.40, stdev=3403.48 00:33:37.152 clat (usec): min=14128, max=47746, avg=28050.10, stdev=911.83 00:33:37.152 lat (usec): min=14141, max=47753, avg=28061.44, stdev=911.62 00:33:37.152 clat percentiles (usec): 00:33:37.152 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:37.152 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.152 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:37.152 | 99.00th=[29230], 99.50th=[30016], 99.90th=[38011], 99.95th=[38011], 00:33:37.152 | 99.99th=[47973] 
00:33:37.152 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.32, stdev=57.91, samples=19 00:33:37.152 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:37.152 lat (msec) : 20=0.14%, 50=99.86% 00:33:37.152 cpu : usr=98.48%, sys=1.14%, ctx=13, majf=0, minf=9 00:33:37.152 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename2: (groupid=0, jobs=1): err= 0: pid=3716003: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:33:37.152 slat (nsec): min=6736, max=78343, avg=18356.35, stdev=6728.81 00:33:37.152 clat (usec): min=16677, max=34329, avg=27951.47, stdev=888.96 00:33:37.152 lat (usec): min=16687, max=34358, avg=27969.83, stdev=888.72 00:33:37.152 clat percentiles (usec): 00:33:37.152 | 1.00th=[24773], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:37.152 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.152 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:37.152 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32900], 99.95th=[32900], 00:33:37.152 | 99.99th=[34341] 00:33:37.152 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:33:37.152 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:33:37.152 lat (msec) : 20=0.28%, 50=99.72% 00:33:37.152 cpu : usr=98.74%, sys=0.89%, ctx=14, majf=0, minf=9 00:33:37.152 IO depths : 1=5.4%, 2=11.5%, 4=24.2%, 8=51.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename2: (groupid=0, jobs=1): err= 0: pid=3716004: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:37.152 slat (usec): min=4, max=122, avg=48.82, stdev=28.97 00:33:37.152 clat (usec): min=10635, max=45508, avg=27608.19, stdev=1507.85 00:33:37.152 lat (usec): min=10652, max=45524, avg=27657.01, stdev=1508.29 00:33:37.152 clat percentiles (usec): 00:33:37.152 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:37.152 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:37.152 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.152 | 99.00th=[28967], 99.50th=[31327], 99.90th=[45351], 99.95th=[45351], 00:33:37.152 | 99.99th=[45351] 00:33:37.152 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.32, stdev=57.91, samples=19 00:33:37.152 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:37.152 lat (msec) : 20=0.56%, 50=99.44% 00:33:37.152 cpu : usr=98.83%, sys=0.80%, ctx=12, majf=0, minf=9 00:33:37.152 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename2: (groupid=0, jobs=1): err= 0: pid=3716005: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=578, BW=2315KiB/s (2371kB/s)(22.7MiB/10056msec) 00:33:37.152 slat (nsec): min=6725, max=48754, avg=11225.77, stdev=4287.52 00:33:37.152 clat (usec): min=2453, max=55812, avg=27434.42, stdev=3339.50 00:33:37.152 lat (usec): min=2470, max=55824, 
avg=27445.64, stdev=3338.57 00:33:37.152 clat percentiles (usec): 00:33:37.152 | 1.00th=[ 9241], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:37.152 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:37.152 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:37.152 | 99.00th=[28967], 99.50th=[30540], 99.90th=[40109], 99.95th=[40109], 00:33:37.152 | 99.99th=[55837] 00:33:37.152 bw ( KiB/s): min= 2176, max= 2944, per=4.26%, avg=2321.60, stdev=155.90, samples=20 00:33:37.152 iops : min= 544, max= 736, avg=580.40, stdev=38.97, samples=20 00:33:37.152 lat (msec) : 4=0.82%, 10=0.43%, 20=2.11%, 50=96.60%, 100=0.03% 00:33:37.152 cpu : usr=98.63%, sys=0.99%, ctx=13, majf=0, minf=9 00:33:37.152 IO depths : 1=5.7%, 2=11.8%, 4=24.5%, 8=51.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:37.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.152 issued rwts: total=5820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.152 filename2: (groupid=0, jobs=1): err= 0: pid=3716006: Tue Nov 19 17:51:37 2024 00:33:37.152 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.4MiB/10013msec) 00:33:37.152 slat (usec): min=6, max=117, avg=50.66, stdev=28.34 00:33:37.153 clat (usec): min=10139, max=30445, avg=27566.57, stdev=1516.79 00:33:37.153 lat (usec): min=10153, max=30498, avg=27617.22, stdev=1516.72 00:33:37.153 clat percentiles (usec): 00:33:37.153 | 1.00th=[18482], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:37.153 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:37.153 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:37.153 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30016], 99.95th=[30278], 00:33:37.153 | 99.99th=[30540] 00:33:37.153 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2284.80, 
stdev=62.64, samples=20 00:33:37.153 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:37.153 lat (msec) : 20=1.12%, 50=98.88% 00:33:37.153 cpu : usr=98.74%, sys=0.89%, ctx=11, majf=0, minf=9 00:33:37.153 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.153 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.153 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.153 filename2: (groupid=0, jobs=1): err= 0: pid=3716007: Tue Nov 19 17:51:37 2024 00:33:37.153 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10002msec) 00:33:37.153 slat (usec): min=6, max=124, avg=56.41, stdev=26.40 00:33:37.153 clat (usec): min=10547, max=62894, avg=27505.96, stdev=1863.82 00:33:37.153 lat (usec): min=10570, max=62922, avg=27562.38, stdev=1867.50 00:33:37.153 clat percentiles (usec): 00:33:37.153 | 1.00th=[20841], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.153 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:33:37.153 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.153 | 99.00th=[28705], 99.50th=[31327], 99.90th=[46924], 99.95th=[47449], 00:33:37.153 | 99.99th=[62653] 00:33:37.153 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.84, stdev=54.51, samples=19 00:33:37.153 iops : min= 544, max= 576, avg=568.21, stdev=13.63, samples=19 00:33:37.153 lat (msec) : 20=0.84%, 50=99.12%, 100=0.04% 00:33:37.153 cpu : usr=98.63%, sys=1.00%, ctx=14, majf=0, minf=9 00:33:37.153 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.153 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.153 issued rwts: total=5702,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:37.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.153 filename2: (groupid=0, jobs=1): err= 0: pid=3716008: Tue Nov 19 17:51:37 2024 00:33:37.153 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:33:37.153 slat (usec): min=7, max=119, avg=58.95, stdev=24.68 00:33:37.153 clat (usec): min=17890, max=30519, avg=27532.28, stdev=609.32 00:33:37.153 lat (usec): min=17959, max=30566, avg=27591.23, stdev=614.08 00:33:37.153 clat percentiles (usec): 00:33:37.153 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132], 00:33:37.153 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:33:37.153 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:37.153 | 99.00th=[28443], 99.50th=[28705], 99.90th=[30016], 99.95th=[30278], 00:33:37.153 | 99.99th=[30540] 00:33:37.153 bw ( KiB/s): min= 2176, max= 2304, per=4.18%, avg=2277.05, stdev=53.61, samples=19 00:33:37.153 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:37.153 lat (msec) : 20=0.28%, 50=99.72% 00:33:37.153 cpu : usr=98.59%, sys=1.02%, ctx=30, majf=0, minf=9 00:33:37.153 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.153 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.153 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:37.153 00:33:37.153 Run status group 0 (all jobs): 00:33:37.153 READ: bw=53.2MiB/s (55.8MB/s), 2272KiB/s-2315KiB/s (2326kB/s-2371kB/s), io=535MiB (561MB), run=10002-10056msec 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@45 -- # for sub in "$@" 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:37.153 17:51:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 bdev_null0 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.153 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.154 [2024-11-19 17:51:38.224507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.154 bdev_null1 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:37.154 { 00:33:37.154 "params": { 00:33:37.154 "name": "Nvme$subsystem", 00:33:37.154 "trtype": "$TEST_TRANSPORT", 00:33:37.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.154 "adrfam": "ipv4", 00:33:37.154 "trsvcid": "$NVMF_PORT", 00:33:37.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.154 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:37.154 "hdgst": ${hdgst:-false}, 00:33:37.154 "ddgst": ${ddgst:-false} 00:33:37.154 }, 00:33:37.154 "method": "bdev_nvme_attach_controller" 00:33:37.154 } 00:33:37.154 EOF 00:33:37.154 )") 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:37.154 
17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:37.154 { 00:33:37.154 "params": { 00:33:37.154 "name": "Nvme$subsystem", 00:33:37.154 "trtype": "$TEST_TRANSPORT", 00:33:37.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.154 "adrfam": "ipv4", 00:33:37.154 "trsvcid": "$NVMF_PORT", 00:33:37.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.154 "hdgst": ${hdgst:-false}, 00:33:37.154 "ddgst": ${ddgst:-false} 00:33:37.154 }, 00:33:37.154 "method": "bdev_nvme_attach_controller" 00:33:37.154 } 00:33:37.154 EOF 00:33:37.154 )") 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:37.154 "params": { 00:33:37.154 "name": "Nvme0", 00:33:37.154 "trtype": "tcp", 00:33:37.154 "traddr": "10.0.0.2", 00:33:37.154 "adrfam": "ipv4", 00:33:37.154 "trsvcid": "4420", 00:33:37.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.154 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:37.154 "hdgst": false, 00:33:37.154 "ddgst": false 00:33:37.154 }, 00:33:37.154 "method": "bdev_nvme_attach_controller" 00:33:37.154 },{ 00:33:37.154 "params": { 00:33:37.154 "name": "Nvme1", 00:33:37.154 "trtype": "tcp", 00:33:37.154 "traddr": "10.0.0.2", 00:33:37.154 "adrfam": "ipv4", 00:33:37.154 "trsvcid": "4420", 00:33:37.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:37.154 "hdgst": false, 00:33:37.154 "ddgst": false 00:33:37.154 }, 00:33:37.154 "method": "bdev_nvme_attach_controller" 00:33:37.154 }' 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:37.154 17:51:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:37.154 17:51:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:37.154 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:37.154 ... 00:33:37.154 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:37.154 ... 00:33:37.154 fio-3.35 00:33:37.154 Starting 4 threads 00:33:42.424 00:33:42.424 filename0: (groupid=0, jobs=1): err= 0: pid=3717955: Tue Nov 19 17:51:44 2024 00:33:42.424 read: IOPS=2779, BW=21.7MiB/s (22.8MB/s)(109MiB/5004msec) 00:33:42.424 slat (nsec): min=6123, max=49283, avg=8686.02, stdev=2973.76 00:33:42.424 clat (usec): min=893, max=5264, avg=2850.95, stdev=387.04 00:33:42.424 lat (usec): min=927, max=5273, avg=2859.64, stdev=386.76 00:33:42.424 clat percentiles (usec): 00:33:42.424 | 1.00th=[ 1811], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2540], 00:33:42.424 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 2999], 00:33:42.424 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 3425], 00:33:42.424 | 99.00th=[ 3851], 99.50th=[ 4080], 99.90th=[ 4686], 99.95th=[ 5211], 00:33:42.424 | 99.99th=[ 5276] 00:33:42.424 bw ( KiB/s): min=21072, max=24240, per=26.67%, avg=22246.40, stdev=981.72, samples=10 00:33:42.424 iops : min= 2634, max= 3030, avg=2780.80, stdev=122.71, samples=10 00:33:42.424 lat (usec) : 1000=0.09% 00:33:42.424 lat (msec) : 2=1.53%, 4=97.68%, 10=0.69% 00:33:42.424 cpu : usr=95.58%, sys=4.12%, ctx=6, majf=0, minf=9 00:33:42.424 IO depths : 1=0.2%, 2=6.5%, 4=65.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.424 complete : 
0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.424 issued rwts: total=13909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:42.424 filename0: (groupid=0, jobs=1): err= 0: pid=3717956: Tue Nov 19 17:51:44 2024 00:33:42.424 read: IOPS=2562, BW=20.0MiB/s (21.0MB/s)(100MiB/5002msec) 00:33:42.424 slat (nsec): min=6122, max=53021, avg=8728.97, stdev=3051.72 00:33:42.424 clat (usec): min=675, max=5656, avg=3095.90, stdev=451.04 00:33:42.424 lat (usec): min=685, max=5668, avg=3104.63, stdev=450.85 00:33:42.424 clat percentiles (usec): 00:33:42.424 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2802], 00:33:42.424 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:33:42.424 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3654], 95.00th=[ 3949], 00:33:42.424 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5604], 00:33:42.424 | 99.99th=[ 5669] 00:33:42.424 bw ( KiB/s): min=19248, max=21296, per=24.50%, avg=20435.56, stdev=563.92, samples=9 00:33:42.424 iops : min= 2406, max= 2662, avg=2554.44, stdev=70.49, samples=9 00:33:42.424 lat (usec) : 750=0.01% 00:33:42.424 lat (msec) : 2=0.25%, 4=95.11%, 10=4.63% 00:33:42.424 cpu : usr=95.88%, sys=3.82%, ctx=7, majf=0, minf=9 00:33:42.424 IO depths : 1=0.1%, 2=2.7%, 4=68.5%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.424 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.424 issued rwts: total=12820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:42.424 filename1: (groupid=0, jobs=1): err= 0: pid=3717957: Tue Nov 19 17:51:44 2024 00:33:42.424 read: IOPS=2573, BW=20.1MiB/s (21.1MB/s)(101MiB/5004msec) 00:33:42.424 slat (nsec): min=6123, max=47151, avg=8356.66, stdev=2819.71 00:33:42.424 clat (usec): min=1024, max=5930, 
avg=3083.90, stdev=423.09 00:33:42.424 lat (usec): min=1031, max=5940, avg=3092.26, stdev=422.95 00:33:42.424 clat percentiles (usec): 00:33:42.424 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2835], 00:33:42.424 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:33:42.424 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3556], 95.00th=[ 3818], 00:33:42.424 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[ 5669], 00:33:42.424 | 99.99th=[ 5932] 00:33:42.424 bw ( KiB/s): min=19360, max=21840, per=24.69%, avg=20593.60, stdev=760.61, samples=10 00:33:42.424 iops : min= 2420, max= 2730, avg=2574.20, stdev=95.08, samples=10 00:33:42.424 lat (msec) : 2=0.35%, 4=96.17%, 10=3.48% 00:33:42.424 cpu : usr=95.66%, sys=4.02%, ctx=7, majf=0, minf=9 00:33:42.424 IO depths : 1=0.2%, 2=1.9%, 4=70.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.424 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.424 issued rwts: total=12879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:42.424 filename1: (groupid=0, jobs=1): err= 0: pid=3717959: Tue Nov 19 17:51:44 2024 00:33:42.425 read: IOPS=2509, BW=19.6MiB/s (20.6MB/s)(98.1MiB/5003msec) 00:33:42.425 slat (nsec): min=6138, max=53453, avg=8314.02, stdev=2822.08 00:33:42.425 clat (usec): min=782, max=5753, avg=3162.32, stdev=428.75 00:33:42.425 lat (usec): min=793, max=5761, avg=3170.64, stdev=428.62 00:33:42.425 clat percentiles (usec): 00:33:42.425 | 1.00th=[ 2245], 5.00th=[ 2606], 10.00th=[ 2802], 20.00th=[ 2999], 00:33:42.425 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:33:42.425 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3687], 95.00th=[ 3982], 00:33:42.425 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5604], 00:33:42.425 | 99.99th=[ 5669] 00:33:42.425 bw ( KiB/s): 
min=19312, max=20640, per=24.08%, avg=20082.50, stdev=511.95, samples=10 00:33:42.425 iops : min= 2414, max= 2580, avg=2510.30, stdev=63.98, samples=10 00:33:42.425 lat (usec) : 1000=0.01% 00:33:42.425 lat (msec) : 2=0.23%, 4=94.85%, 10=4.91% 00:33:42.425 cpu : usr=95.90%, sys=3.80%, ctx=9, majf=0, minf=9 00:33:42.425 IO depths : 1=0.1%, 2=1.8%, 4=71.8%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:42.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.425 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.425 issued rwts: total=12557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:42.425 00:33:42.425 Run status group 0 (all jobs): 00:33:42.425 READ: bw=81.4MiB/s (85.4MB/s), 19.6MiB/s-21.7MiB/s (20.6MB/s-22.8MB/s), io=408MiB (427MB), run=5002-5004msec 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 00:33:42.684 real 0m24.292s 00:33:42.684 user 4m51.726s 00:33:42.684 sys 0m5.157s 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 ************************************ 00:33:42.684 END TEST fio_dif_rand_params 00:33:42.684 ************************************ 00:33:42.684 17:51:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:42.684 17:51:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:42.684 17:51:44 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 ************************************ 00:33:42.684 START TEST fio_dif_digest 00:33:42.684 ************************************ 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:33:42.684 bdev_null0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:42.684 [2024-11-19 17:51:44.843470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.684 { 00:33:42.684 "params": { 00:33:42.684 "name": "Nvme$subsystem", 00:33:42.684 "trtype": "$TEST_TRANSPORT", 00:33:42.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.684 "adrfam": "ipv4", 00:33:42.684 "trsvcid": "$NVMF_PORT", 00:33:42.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.684 "hdgst": ${hdgst:-false}, 00:33:42.684 "ddgst": ${ddgst:-false} 00:33:42.684 }, 00:33:42.684 "method": "bdev_nvme_attach_controller" 00:33:42.684 } 00:33:42.684 EOF 00:33:42.684 )") 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 
-- # shift 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.684 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:42.685 "params": { 00:33:42.685 "name": "Nvme0", 00:33:42.685 "trtype": "tcp", 00:33:42.685 "traddr": "10.0.0.2", 00:33:42.685 "adrfam": "ipv4", 00:33:42.685 "trsvcid": "4420", 00:33:42.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:42.685 "hdgst": true, 00:33:42.685 "ddgst": true 00:33:42.685 }, 00:33:42.685 "method": "bdev_nvme_attach_controller" 00:33:42.685 }' 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:42.685 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:42.976 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:42.976 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:42.976 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:42.976 17:51:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.235 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:43.235 ... 00:33:43.235 fio-3.35 00:33:43.235 Starting 3 threads 00:33:55.442 00:33:55.442 filename0: (groupid=0, jobs=1): err= 0: pid=3719086: Tue Nov 19 17:51:55 2024 00:33:55.442 read: IOPS=306, BW=38.3MiB/s (40.1MB/s)(384MiB/10044msec) 00:33:55.442 slat (nsec): min=6286, max=52191, avg=22490.33, stdev=6270.16 00:33:55.442 clat (usec): min=7076, max=50610, avg=9763.61, stdev=1208.68 00:33:55.442 lat (usec): min=7084, max=50634, avg=9786.10, stdev=1208.81 00:33:55.442 clat percentiles (usec): 00:33:55.442 | 1.00th=[ 8094], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:33:55.442 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:33:55.442 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:33:55.442 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12911], 99.95th=[45876], 00:33:55.442 | 99.99th=[50594] 00:33:55.442 bw ( KiB/s): min=37888, max=41472, per=35.84%, avg=39321.60, stdev=926.38, samples=20 00:33:55.442 iops : min= 296, max= 324, avg=307.20, stdev= 7.24, samples=20 00:33:55.442 lat (msec) : 10=63.92%, 20=36.01%, 50=0.03%, 100=0.03% 00:33:55.442 cpu : 
usr=96.23%, sys=3.43%, ctx=26, majf=0, minf=61 00:33:55.442 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.442 issued rwts: total=3074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.442 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:55.442 filename0: (groupid=0, jobs=1): err= 0: pid=3719087: Tue Nov 19 17:51:55 2024 00:33:55.442 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(352MiB/10047msec) 00:33:55.442 slat (nsec): min=6367, max=50615, avg=17221.91, stdev=6819.59 00:33:55.442 clat (usec): min=6830, max=47758, avg=10676.18, stdev=1247.80 00:33:55.442 lat (usec): min=6850, max=47770, avg=10693.40, stdev=1247.89 00:33:55.442 clat percentiles (usec): 00:33:55.442 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:33:55.442 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:33:55.442 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:33:55.442 | 99.00th=[12518], 99.50th=[12780], 99.90th=[14091], 99.95th=[46924], 00:33:55.442 | 99.99th=[47973] 00:33:55.442 bw ( KiB/s): min=34304, max=36864, per=32.81%, avg=35993.60, stdev=744.74, samples=20 00:33:55.442 iops : min= 268, max= 288, avg=281.20, stdev= 5.82, samples=20 00:33:55.442 lat (msec) : 10=20.82%, 20=79.10%, 50=0.07% 00:33:55.442 cpu : usr=96.91%, sys=2.78%, ctx=16, majf=0, minf=22 00:33:55.442 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.442 issued rwts: total=2814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:55.443 filename0: (groupid=0, jobs=1): err= 0: 
pid=3719088: Tue Nov 19 17:51:55 2024 00:33:55.443 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(341MiB/10047msec) 00:33:55.443 slat (nsec): min=6390, max=76392, avg=16678.80, stdev=6843.28 00:33:55.443 clat (usec): min=6738, max=48027, avg=11031.04, stdev=1258.96 00:33:55.443 lat (usec): min=6751, max=48038, avg=11047.72, stdev=1258.87 00:33:55.443 clat percentiles (usec): 00:33:55.443 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10290], 00:33:55.443 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:33:55.443 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:33:55.443 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14615], 99.95th=[46924], 00:33:55.443 | 99.99th=[47973] 00:33:55.443 bw ( KiB/s): min=33536, max=36096, per=31.74%, avg=34828.80, stdev=682.26, samples=20 00:33:55.443 iops : min= 262, max= 282, avg=272.10, stdev= 5.33, samples=20 00:33:55.443 lat (msec) : 10=7.93%, 20=92.00%, 50=0.07% 00:33:55.443 cpu : usr=96.99%, sys=2.71%, ctx=17, majf=0, minf=121 00:33:55.443 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.443 issued rwts: total=2724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:55.443 00:33:55.443 Run status group 0 (all jobs): 00:33:55.443 READ: bw=107MiB/s (112MB/s), 33.9MiB/s-38.3MiB/s (35.5MB/s-40.1MB/s), io=1077MiB (1129MB), run=10044-10047msec 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:55.443 17:51:56 
nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.443 00:33:55.443 real 0m11.286s 00:33:55.443 user 0m36.083s 00:33:55.443 sys 0m1.179s 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:55.443 17:51:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.443 ************************************ 00:33:55.443 END TEST fio_dif_digest 00:33:55.443 ************************************ 00:33:55.443 17:51:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:55.443 17:51:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.443 rmmod nvme_tcp 00:33:55.443 rmmod nvme_fabrics 00:33:55.443 rmmod nvme_keyring 00:33:55.443 17:51:56 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3709987 ']' 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3709987 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3709987 ']' 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3709987 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3709987 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709987' 00:33:55.443 killing process with pid 3709987 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3709987 00:33:55.443 17:51:56 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3709987 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:55.443 17:51:56 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:57.351 Waiting for block devices as requested 00:33:57.351 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:57.351 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:57.351 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.351 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.351 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.351 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.610 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.610 0000:00:04.1 (8086 
2021): vfio-pci -> ioatdma 00:33:57.610 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.869 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:57.869 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.869 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:58.127 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:58.127 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:58.127 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:58.127 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:58.386 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:58.386 17:52:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.386 17:52:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:58.386 17:52:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.920 17:52:02 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:00.920 00:34:00.920 real 1m14.415s 00:34:00.920 user 7m10.678s 00:34:00.920 sys 0m20.224s 00:34:00.920 17:52:02 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.920 17:52:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:00.920 ************************************ 00:34:00.920 END TEST nvmf_dif 00:34:00.920 ************************************ 00:34:00.920 17:52:02 -- spdk/autotest.sh@290 -- # run_test 
nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:00.920 17:52:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:00.920 17:52:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:00.920 17:52:02 -- common/autotest_common.sh@10 -- # set +x 00:34:00.920 ************************************ 00:34:00.920 START TEST nvmf_abort_qd_sizes 00:34:00.920 ************************************ 00:34:00.920 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:00.920 * Looking for test storage... 00:34:00.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:00.920 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:00.920 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:00.920 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:00.920 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- 
scripts/common.sh@341 -- # ver2_l=1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.921 --rc genhtml_branch_coverage=1 00:34:00.921 --rc 
genhtml_function_coverage=1 00:34:00.921 --rc genhtml_legend=1 00:34:00.921 --rc geninfo_all_blocks=1 00:34:00.921 --rc geninfo_unexecuted_blocks=1 00:34:00.921 00:34:00.921 ' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.921 --rc genhtml_branch_coverage=1 00:34:00.921 --rc genhtml_function_coverage=1 00:34:00.921 --rc genhtml_legend=1 00:34:00.921 --rc geninfo_all_blocks=1 00:34:00.921 --rc geninfo_unexecuted_blocks=1 00:34:00.921 00:34:00.921 ' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.921 --rc genhtml_branch_coverage=1 00:34:00.921 --rc genhtml_function_coverage=1 00:34:00.921 --rc genhtml_legend=1 00:34:00.921 --rc geninfo_all_blocks=1 00:34:00.921 --rc geninfo_unexecuted_blocks=1 00:34:00.921 00:34:00.921 ' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.921 --rc genhtml_branch_coverage=1 00:34:00.921 --rc genhtml_function_coverage=1 00:34:00.921 --rc genhtml_legend=1 00:34:00.921 --rc geninfo_all_blocks=1 00:34:00.921 --rc geninfo_unexecuted_blocks=1 00:34:00.921 00:34:00.921 ' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.921 17:52:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:00.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:00.921 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:34:00.922 17:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:07.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:07.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:07.493 Found net devices under 0000:86:00.0: cvl_0_0 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:07.493 Found net devices under 0000:86:00.1: cvl_0_1 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.493 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:07.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:34:07.494 00:34:07.494 --- 10.0.0.2 ping statistics --- 00:34:07.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.494 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:07.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:34:07.494 00:34:07.494 --- 10.0.0.1 ping statistics --- 00:34:07.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.494 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:07.494 17:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:09.400 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:09.400 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:09.659 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:09.659 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:10.227 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3727036 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3727036 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3727036 ']' 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:10.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.486 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:10.486 [2024-11-19 17:52:12.615373] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:34:10.486 [2024-11-19 17:52:12.615417] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.486 [2024-11-19 17:52:12.695358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:10.745 [2024-11-19 17:52:12.739504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.745 [2024-11-19 17:52:12.739542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.745 [2024-11-19 17:52:12.739549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.745 [2024-11-19 17:52:12.739556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.745 [2024-11-19 17:52:12.739561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
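The `nvmf_tcp_init` trace above (common.sh@250-291) moves the target-side interface into its own network namespace, addresses both ends, opens the NVMe/TCP port in iptables, and ping-verifies the path. A rough dry-runnable sketch of that sequence; the interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) are copied from this log and are fixture-specific assumptions, and this is a simplified reimplementation, not SPDK's actual helper:

```shell
# Sketch of the namespace plumbing traced above. With DRY_RUN=1 the
# commands are printed instead of executed (the real thing needs root).
nvmf_tcp_init_sketch() {
    target_if=$1 initiator_if=$2 ns=$3
    run=eval
    [ "${DRY_RUN:-0}" = 1 ] && run=echo
    $run "ip -4 addr flush $target_if"                          # common.sh@267
    $run "ip -4 addr flush $initiator_if"                       # common.sh@268
    $run "ip netns add $ns"                                     # common.sh@271
    $run "ip link set $target_if netns $ns"                     # common.sh@274
    $run "ip addr add 10.0.0.1/24 dev $initiator_if"            # common.sh@277
    $run "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    $run "ip link set $initiator_if up"                         # common.sh@281
    $run "ip netns exec $ns ip link set $target_if up"
    $run "ip netns exec $ns ip link set lo up"
    $run "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}
DRY_RUN=1 nvmf_tcp_init_sketch cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

After this, the bidirectional pings in the log (host to 10.0.0.2, namespace to 10.0.0.1) confirm the veth-less, real-NIC split works before `nvmf_tgt` is launched inside the namespace.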
00:34:10.745 [2024-11-19 17:52:12.741136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.745 [2024-11-19 17:52:12.741262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:10.745 [2024-11-19 17:52:12.741365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.745 [2024-11-19 17:52:12.741366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:10.745 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.746 17:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:10.746 ************************************ 00:34:10.746 START TEST spdk_target_abort 00:34:10.746 ************************************ 00:34:10.746 17:52:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:10.746 17:52:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:10.746 17:52:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:10.746 17:52:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.746 17:52:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 spdk_targetn1 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 [2024-11-19 17:52:15.762121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 [2024-11-19 17:52:15.809315] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:14.035 17:52:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:17.322 Initializing NVMe Controllers 00:34:17.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:17.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:17.322 Initialization complete. Launching workers. 
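The `rabort` loop traced above (abort_qd_sizes.sh@28-29) grows the transport ID one `key:value` token per iteration, which is why the trace shows the same `target=` assignment five times with one more field each time. A minimal sketch of just that string-building step, not the full `rabort` function:

```shell
# Assemble an SPDK transport ID string field by field, mirroring the
# incremental target= assignments visible in the trace above.
build_trid() {
    target=""
    for kv in "trtype:$1" "adrfam:$2" "traddr:$3" "trsvcid:$4" "subnqn:$5"; do
        target="${target:+$target }$kv"   # space-separate after the first token
    done
    echo "$target"
}
build_trid tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
```

The resulting string is what the log then passes verbatim to `build/examples/abort` via `-r` for each queue depth in `qds=(4 24 64)`.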
00:34:17.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15954, failed: 0 00:34:17.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1391, failed to submit 14563 00:34:17.322 success 716, unsuccessful 675, failed 0 00:34:17.322 17:52:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:17.322 17:52:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:20.608 Initializing NVMe Controllers 00:34:20.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:20.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:20.608 Initialization complete. Launching workers. 00:34:20.608 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8518, failed: 0 00:34:20.608 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1260, failed to submit 7258 00:34:20.608 success 319, unsuccessful 941, failed 0 00:34:20.608 17:52:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:20.608 17:52:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:23.894 Initializing NVMe Controllers 00:34:23.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:23.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:23.894 Initialization complete. Launching workers. 
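The per-run abort accounting above is internally consistent: completed I/Os split into aborts submitted plus aborts that failed to submit, and submitted aborts split into success plus unsuccessful. A quick arithmetic check with the numbers copied from the qd=4 and qd=24 runs in this log:

```shell
# Verify the abort bookkeeping identities for two runs from the log:
#   completed = submitted + failed_to_submit
#   submitted = success + unsuccessful
check_run() {
    completed=$1 submitted=$2 not_submitted=$3 success=$4 unsuccessful=$5
    [ $((submitted + not_submitted)) -eq "$completed" ] || return 1
    [ $((success + unsuccessful)) -eq "$submitted" ] || return 1
    echo "run consistent: $completed I/Os, $submitted aborts"
}
check_run 15954 1391 14563 716 675   # qd=4 run
check_run 8518  1260 7258  319 941   # qd=24 run
```

The same identities hold for the qd=64 run (2872 + 34961 = 37833; 613 + 2259 = 2872), so no I/Os are unaccounted for across any queue depth.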
00:34:23.894 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37833, failed: 0 00:34:23.894 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2872, failed to submit 34961 00:34:23.894 success 613, unsuccessful 2259, failed 0 00:34:23.894 17:52:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:23.894 17:52:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.894 17:52:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:23.894 17:52:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.894 17:52:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:23.894 17:52:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.894 17:52:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.830 17:52:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.830 17:52:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3727036 00:34:24.830 17:52:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3727036 ']' 00:34:24.831 17:52:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3727036 00:34:24.831 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:24.831 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.831 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727036 00:34:25.090 17:52:27 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727036' 00:34:25.090 killing process with pid 3727036 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3727036 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3727036 00:34:25.090 00:34:25.090 real 0m14.292s 00:34:25.090 user 0m54.403s 00:34:25.090 sys 0m2.681s 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:25.090 ************************************ 00:34:25.090 END TEST spdk_target_abort 00:34:25.090 ************************************ 00:34:25.090 17:52:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:25.090 17:52:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:25.090 17:52:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.090 17:52:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.090 ************************************ 00:34:25.090 START TEST kernel_target_abort 00:34:25.090 ************************************ 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:25.090 17:52:27 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:25.090 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:25.350 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:25.350 17:52:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:27.886 Waiting for block devices as requested 00:34:27.886 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:28.145 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:28.145 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:28.145 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:28.404 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:28.404 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:28.404 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:28.404 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:28.664 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:28.664 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:28.664 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:28.924 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:28.924 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:28.924 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:29.183 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:29.183 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:29.183 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:29.183 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:29.183 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:29.183 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:29.183 17:52:31 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:29.183 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:29.183 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:29.183 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:29.183 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:29.442 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:29.442 No valid GPT data, bailing 00:34:29.442 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:29.442 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:29.442 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:29.443 00:34:29.443 Discovery Log Number of Records 2, Generation counter 2 00:34:29.443 =====Discovery Log Entry 0====== 00:34:29.443 trtype: tcp 00:34:29.443 adrfam: ipv4 00:34:29.443 subtype: current discovery subsystem 00:34:29.443 treq: not specified, sq flow control disable supported 00:34:29.443 portid: 1 00:34:29.443 trsvcid: 4420 00:34:29.443 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:29.443 traddr: 10.0.0.1 00:34:29.443 eflags: none 00:34:29.443 sectype: none 00:34:29.443 =====Discovery Log Entry 1====== 00:34:29.443 trtype: tcp 00:34:29.443 adrfam: ipv4 00:34:29.443 subtype: nvme subsystem 00:34:29.443 treq: not specified, sq flow control disable supported 00:34:29.443 portid: 1 00:34:29.443 trsvcid: 4420 00:34:29.443 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:29.443 traddr: 10.0.0.1 00:34:29.443 eflags: none 00:34:29.443 sectype: none 00:34:29.443 17:52:31 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:29.443 17:52:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:32.800 Initializing NVMe Controllers 00:34:32.800 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:32.800 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:32.800 Initialization complete. Launching workers. 
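The `configure_kernel_target` trace earlier (common.sh@686-705) drives the in-kernel nvmet target purely through configfs `mkdir`/`echo`/`ln -s`. The log shows the values written but not the attribute file names, so the paths below use the standard nvmet configfs attribute names (`attr_allow_any_host`, `device_path`, `enable`, `addr_*`) as an assumption about what those `echo` lines target; the block device `/dev/nvme0n1` is fixture-specific:

```shell
# Dry-runnable sketch of a configfs-driven kernel NVMe/TCP target, in
# the same order as the echo/mkdir/ln trace above. Needs root and the
# nvmet/nvmet_tcp modules when actually executed.
kernel_target_sketch() {
    nqn=nqn.2016-06.io.spdk:testnqn dev=/dev/nvme0n1
    nvmet=/sys/kernel/config/nvmet
    run=eval
    [ "${DRY_RUN:-0}" = 1 ] && run=echo
    $run "mkdir $nvmet/subsystems/$nqn"
    $run "mkdir $nvmet/subsystems/$nqn/namespaces/1"
    $run "mkdir $nvmet/ports/1"
    $run "echo 1 > $nvmet/subsystems/$nqn/attr_allow_any_host"
    $run "echo $dev > $nvmet/subsystems/$nqn/namespaces/1/device_path"
    $run "echo 1 > $nvmet/subsystems/$nqn/namespaces/1/enable"
    $run "echo 10.0.0.1 > $nvmet/ports/1/addr_traddr"
    $run "echo tcp > $nvmet/ports/1/addr_trtype"
    $run "echo 4420 > $nvmet/ports/1/addr_trsvcid"
    $run "echo ipv4 > $nvmet/ports/1/addr_adrfam"
    $run "ln -s $nvmet/subsystems/$nqn $nvmet/ports/1/subsystems/"
}
DRY_RUN=1 kernel_target_sketch
```

The final `ln -s` is what exposes the subsystem on the port; the `nvme discover` output in the log (two records: the discovery subsystem plus `nqn.2016-06.io.spdk:testnqn`) confirms it took effect.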
00:34:32.800 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92925, failed: 0 00:34:32.800 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92925, failed to submit 0 00:34:32.800 success 0, unsuccessful 92925, failed 0 00:34:32.800 17:52:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:32.800 17:52:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.111 Initializing NVMe Controllers 00:34:36.111 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.111 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.111 Initialization complete. Launching workers. 00:34:36.111 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142054, failed: 0 00:34:36.111 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35614, failed to submit 106440 00:34:36.111 success 0, unsuccessful 35614, failed 0 00:34:36.111 17:52:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.111 17:52:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.399 Initializing NVMe Controllers 00:34:39.399 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:39.399 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:39.399 Initialization complete. Launching workers. 
00:34:39.399 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136122, failed: 0 00:34:39.399 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34074, failed to submit 102048 00:34:39.399 success 0, unsuccessful 34074, failed 0 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:39.399 17:52:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:41.936 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 
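The clean_kernel_target sequence above (nvmf/common.sh@712-723) tears down the kernel nvmet target strictly in reverse creation order: unlink the port's subsystem symlink, then remove the namespace, port, and subsystem configfs directories before unloading the modules. A minimal dry-run sketch of that order; the `enable` path for the `echo 0` step and the `RUN=echo` dry-run switch are assumptions for illustration, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the configfs teardown order used by clean_kernel_target.
# RUN defaults to echo (print commands only); unset it to execute for real,
# which requires root and a live nvmet configfs tree.
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
RUN=${RUN:-echo}

clean_kernel_target() {
    # Disable the namespace (path assumed), drop the port->subsystem symlink,
    # then rmdir namespace, port, and subsystem before unloading the modules.
    $RUN "echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable"
    $RUN rm -f "$cfg/ports/1/subsystems/$nqn"
    $RUN rmdir "$cfg/subsystems/$nqn/namespaces/1"
    $RUN rmdir "$cfg/ports/1"
    $RUN rmdir "$cfg/subsystems/$nqn"
    $RUN modprobe -r nvmet_tcp nvmet
}

clean_kernel_target
```

The ordering matters: configfs refuses to rmdir a subsystem while a port still links to it, which is why the `rm -f` on the port symlink comes first.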
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:41.936 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:42.872 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:42.872 00:34:42.872 real 0m17.578s 00:34:42.872 user 0m9.230s 00:34:42.872 sys 0m4.989s 00:34:42.872 17:52:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:42.872 17:52:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:42.872 ************************************ 00:34:42.872 END TEST kernel_target_abort 00:34:42.872 ************************************ 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:42.872 rmmod nvme_tcp 00:34:42.872 rmmod nvme_fabrics 00:34:42.872 rmmod nvme_keyring 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3727036 ']' 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3727036 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3727036 ']' 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3727036 00:34:42.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3727036) - No such process 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3727036 is not found' 00:34:42.872 Process with pid 3727036 is not found 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:42.872 17:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:46.163 Waiting for block devices as requested 00:34:46.163 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:46.163 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:46.163 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:46.163 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:46.163 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:46.163 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:46.163 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:46.163 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:46.163 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:46.423 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:46.423 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:46.423 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:46.682 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:46.682 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:46.682 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:46.941 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:46.941 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:46.941 17:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.478 17:52:51 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:49.478 00:34:49.478 real 0m48.504s 00:34:49.478 user 1m7.936s 00:34:49.478 sys 0m16.473s 00:34:49.478 17:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.478 17:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:49.478 ************************************ 00:34:49.478 END TEST nvmf_abort_qd_sizes 00:34:49.478 ************************************ 00:34:49.478 17:52:51 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:49.478 17:52:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:49.478 17:52:51 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:34:49.478 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:34:49.478 ************************************ 00:34:49.478 START TEST keyring_file 00:34:49.478 ************************************ 00:34:49.478 17:52:51 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:49.478 * Looking for test storage... 00:34:49.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:49.478 17:52:51 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:49.478 17:52:51 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:34:49.478 17:52:51 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:49.478 17:52:51 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:49.478 17:52:51 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.478 17:52:51 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.478 17:52:51 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.479 17:52:51 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:49.479 17:52:51 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.479 17:52:51 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.479 --rc genhtml_branch_coverage=1 00:34:49.479 --rc genhtml_function_coverage=1 00:34:49.479 --rc genhtml_legend=1 00:34:49.479 --rc geninfo_all_blocks=1 00:34:49.479 --rc geninfo_unexecuted_blocks=1 00:34:49.479 00:34:49.479 ' 00:34:49.479 17:52:51 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.479 --rc genhtml_branch_coverage=1 00:34:49.479 --rc genhtml_function_coverage=1 00:34:49.479 --rc genhtml_legend=1 00:34:49.479 --rc geninfo_all_blocks=1 00:34:49.479 --rc 
geninfo_unexecuted_blocks=1 00:34:49.479 00:34:49.479 ' 00:34:49.479 17:52:51 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.479 --rc genhtml_branch_coverage=1 00:34:49.479 --rc genhtml_function_coverage=1 00:34:49.479 --rc genhtml_legend=1 00:34:49.479 --rc geninfo_all_blocks=1 00:34:49.479 --rc geninfo_unexecuted_blocks=1 00:34:49.479 00:34:49.479 ' 00:34:49.479 17:52:51 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.479 --rc genhtml_branch_coverage=1 00:34:49.479 --rc genhtml_function_coverage=1 00:34:49.479 --rc genhtml_legend=1 00:34:49.479 --rc geninfo_all_blocks=1 00:34:49.479 --rc geninfo_unexecuted_blocks=1 00:34:49.479 00:34:49.479 ' 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.479 17:52:51 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.479 17:52:51 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.479 17:52:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.479 17:52:51 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.479 17:52:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.479 17:52:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:49.479 17:52:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:49.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vBxqI6GhG2 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:49.479 17:52:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vBxqI6GhG2 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vBxqI6GhG2 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vBxqI6GhG2 00:34:49.479 17:52:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:49.479 17:52:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:49.480 17:52:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Bv5jpcyHpD 00:34:49.480 17:52:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:49.480 17:52:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:49.480 17:52:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:49.480 17:52:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:49.480 17:52:51 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:49.480 17:52:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:49.480 17:52:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:49.480 17:52:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Bv5jpcyHpD 00:34:49.480 17:52:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Bv5jpcyHpD 00:34:49.480 17:52:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Bv5jpcyHpD 
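prep_key above (keyring/common.sh@15-23) turns each hex key into a TLS PSK interchange string via format_interchange_psk, writes it to a mktemp path, and chmods it 0600. A sketch of what that formatting step produces, assuming the TP 8006 interchange layout (base64 of the PSK bytes plus a little-endian CRC-32, hash id `00` for no digest); it mirrors the shell helper's shape and its `python -` heredoc style, but is not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Sketch of prep_key: format a hex PSK into a TLS interchange string and
# store it in a mode-0600 tempfile. Interchange layout is an assumption
# (TP 8006: base64(key bytes || crc32_le(key bytes)), "00" = no hash).
key=00112233445566778899aabbccddeeff   # key0 from the test above
digest=0
path=$(mktemp)                         # like keyring/common.sh@18

python3 - "$key" "$digest" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
                                       base64.b64encode(key + crc).decode()))
EOF

chmod 0600 "$path"   # keys must not be world-readable
cat "$path"
```

The resulting file path (here `$path`, in the log `/tmp/tmp.vBxqI6GhG2`) is what later gets registered over the bperf socket with `keyring_file_add_key`.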
00:34:49.480 17:52:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=3735888 00:34:49.480 17:52:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3735888 00:34:49.480 17:52:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:49.480 17:52:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3735888 ']' 00:34:49.480 17:52:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.480 17:52:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.480 17:52:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.480 17:52:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.480 17:52:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:49.480 [2024-11-19 17:52:51.602595] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:34:49.480 [2024-11-19 17:52:51.602647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735888 ] 00:34:49.480 [2024-11-19 17:52:51.678856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.739 [2024-11-19 17:52:51.721152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.739 17:52:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.739 17:52:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:49.739 17:52:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:49.739 17:52:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.739 17:52:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:49.739 [2024-11-19 17:52:51.928758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.739 null0 00:34:49.998 [2024-11-19 17:52:51.960815] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:49.998 [2024-11-19 17:52:51.961175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.998 17:52:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:49.998 [2024-11-19 17:52:51.988881] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:49.998 request: 00:34:49.998 { 00:34:49.998 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:49.998 "secure_channel": false, 00:34:49.998 "listen_address": { 00:34:49.998 "trtype": "tcp", 00:34:49.998 "traddr": "127.0.0.1", 00:34:49.998 "trsvcid": "4420" 00:34:49.998 }, 00:34:49.998 "method": "nvmf_subsystem_add_listener", 00:34:49.998 "req_id": 1 00:34:49.998 } 00:34:49.998 Got JSON-RPC error response 00:34:49.998 response: 00:34:49.998 { 00:34:49.998 "code": -32602, 00:34:49.998 "message": "Invalid parameters" 00:34:49.998 } 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.998 17:52:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=3735907 00:34:49.998 17:52:51 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:49.998 17:52:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3735907 /var/tmp/bperf.sock 00:34:49.998 17:52:51 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3735907 ']' 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:49.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.998 17:52:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:49.998 [2024-11-19 17:52:52.036734] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 00:34:49.998 [2024-11-19 17:52:52.036775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735907 ] 00:34:49.998 [2024-11-19 17:52:52.112926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.998 [2024-11-19 17:52:52.155272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.257 17:52:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.257 17:52:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:50.257 17:52:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:50.257 17:52:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:50.257 17:52:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Bv5jpcyHpD 00:34:50.257 17:52:52 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Bv5jpcyHpD 00:34:50.521 17:52:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:50.521 17:52:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:50.521 17:52:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:50.521 17:52:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:50.521 17:52:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:50.781 17:52:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vBxqI6GhG2 == \/\t\m\p\/\t\m\p\.\v\B\x\q\I\6\G\h\G\2 ]] 00:34:50.781 17:52:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:50.781 17:52:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:50.781 17:52:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:50.781 17:52:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:50.781 17:52:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:51.040 17:52:53 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Bv5jpcyHpD == \/\t\m\p\/\t\m\p\.\B\v\5\j\p\c\y\H\p\D ]] 00:34:51.040 17:52:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:34:51.040 17:52:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:51.040 17:52:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:51.040 17:52:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:51.299 17:52:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:51.299 17:52:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:51.299 17:52:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:51.558 [2024-11-19 17:52:53.621452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:51.558 nvme0n1 00:34:51.558 17:52:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:51.558 17:52:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:51.558 17:52:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:51.558 17:52:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:51.558 17:52:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:51.558 17:52:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:34:51.817 17:52:53 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:51.817 17:52:53 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:51.817 17:52:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:51.817 17:52:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:51.817 17:52:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:51.817 17:52:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:51.817 17:52:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:52.076 17:52:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:52.076 17:52:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.076 Running I/O for 1 seconds... 00:34:53.012 18830.00 IOPS, 73.55 MiB/s 00:34:53.012 Latency(us) 00:34:53.012 [2024-11-19T16:52:55.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.012 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:53.012 nvme0n1 : 1.00 18878.03 73.74 0.00 0.00 6768.32 2692.67 11853.47 00:34:53.012 [2024-11-19T16:52:55.235Z] =================================================================================================================== 00:34:53.012 [2024-11-19T16:52:55.235Z] Total : 18878.03 73.74 0.00 0.00 6768.32 2692.67 11853.47 00:34:53.012 { 00:34:53.012 "results": [ 00:34:53.012 { 00:34:53.012 "job": "nvme0n1", 00:34:53.012 "core_mask": "0x2", 00:34:53.012 "workload": "randrw", 00:34:53.012 "percentage": 50, 00:34:53.012 "status": "finished", 00:34:53.012 "queue_depth": 128, 00:34:53.012 "io_size": 4096, 00:34:53.012 "runtime": 1.004289, 00:34:53.012 "iops": 18878.032120236305, 00:34:53.012 "mibps": 73.74231296967307, 
00:34:53.012 "io_failed": 0, 00:34:53.012 "io_timeout": 0, 00:34:53.012 "avg_latency_us": 6768.322953742286, 00:34:53.012 "min_latency_us": 2692.6747826086958, 00:34:53.012 "max_latency_us": 11853.467826086957 00:34:53.012 } 00:34:53.012 ], 00:34:53.012 "core_count": 1 00:34:53.012 } 00:34:53.271 17:52:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:53.271 17:52:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:53.271 17:52:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:53.271 17:52:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:53.271 17:52:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:53.271 17:52:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:53.271 17:52:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:53.271 17:52:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:53.531 17:52:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:53.531 17:52:55 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:53.531 17:52:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:53.531 17:52:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:53.531 17:52:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:53.531 17:52:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:53.531 17:52:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:53.791 17:52:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:53.791 17:52:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:53.791 17:52:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:53.791 17:52:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:53.791 17:52:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:53.791 17:52:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.791 17:52:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:53.791 17:52:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.791 17:52:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:53.791 17:52:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:54.050 [2024-11-19 17:52:56.043725] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:54.050 [2024-11-19 17:52:56.044137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e31f0 (107): Transport endpoint is not connected 00:34:54.050 [2024-11-19 17:52:56.045131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e31f0 (9): Bad file descriptor 00:34:54.050 [2024-11-19 17:52:56.046133] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:54.050 [2024-11-19 17:52:56.046144] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:54.051 [2024-11-19 17:52:56.046151] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:54.051 [2024-11-19 17:52:56.046160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:34:54.051 request: 00:34:54.051 { 00:34:54.051 "name": "nvme0", 00:34:54.051 "trtype": "tcp", 00:34:54.051 "traddr": "127.0.0.1", 00:34:54.051 "adrfam": "ipv4", 00:34:54.051 "trsvcid": "4420", 00:34:54.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:54.051 "prchk_reftag": false, 00:34:54.051 "prchk_guard": false, 00:34:54.051 "hdgst": false, 00:34:54.051 "ddgst": false, 00:34:54.051 "psk": "key1", 00:34:54.051 "allow_unrecognized_csi": false, 00:34:54.051 "method": "bdev_nvme_attach_controller", 00:34:54.051 "req_id": 1 00:34:54.051 } 00:34:54.051 Got JSON-RPC error response 00:34:54.051 response: 00:34:54.051 { 00:34:54.051 "code": -5, 00:34:54.051 "message": "Input/output error" 00:34:54.051 } 00:34:54.051 17:52:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:54.051 17:52:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:54.051 17:52:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:54.051 17:52:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:54.051 17:52:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:54.051 17:52:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:54.051 17:52:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:54.051 17:52:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:54.310 17:52:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:54.310 17:52:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:54.310 17:52:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:54.568 17:52:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:54.569 17:52:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:54.827 17:52:56 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:54.827 17:52:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:54.827 17:52:56 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:55.087 17:52:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:34:55.087 17:52:57 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vBxqI6GhG2 00:34:55.087 17:52:57 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:55.087 17:52:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:55.087 [2024-11-19 17:52:57.235279] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vBxqI6GhG2': 0100660 00:34:55.087 [2024-11-19 17:52:57.235305] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:55.087 request: 00:34:55.087 { 00:34:55.087 "name": "key0", 00:34:55.087 "path": "/tmp/tmp.vBxqI6GhG2", 00:34:55.087 "method": "keyring_file_add_key", 00:34:55.087 "req_id": 1 00:34:55.087 } 00:34:55.087 Got JSON-RPC error response 00:34:55.087 response: 00:34:55.087 { 00:34:55.087 "code": -1, 00:34:55.087 "message": "Operation not permitted" 00:34:55.087 } 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:55.087 17:52:57 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:55.087 17:52:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:55.087 17:52:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vBxqI6GhG2 00:34:55.087 17:52:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:55.087 17:52:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vBxqI6GhG2 00:34:55.346 17:52:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vBxqI6GhG2 00:34:55.346 17:52:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:55.346 17:52:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:55.346 17:52:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:55.346 17:52:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.346 17:52:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:55.346 17:52:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:55.605 17:52:57 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:55.605 17:52:57 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:55.605 17:52:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:55.605 17:52:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:55.605 17:52:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:55.605 17:52:57 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:55.605 17:52:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:55.606 17:52:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:55.606 17:52:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:55.606 17:52:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:55.865 [2024-11-19 17:52:57.832858] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vBxqI6GhG2': No such file or directory 00:34:55.865 [2024-11-19 17:52:57.832884] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:55.865 [2024-11-19 17:52:57.832900] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:55.865 [2024-11-19 17:52:57.832907] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:55.865 [2024-11-19 17:52:57.832914] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:55.865 [2024-11-19 17:52:57.832920] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:55.865 request: 00:34:55.865 { 00:34:55.865 "name": "nvme0", 00:34:55.865 "trtype": "tcp", 00:34:55.865 "traddr": "127.0.0.1", 00:34:55.865 "adrfam": "ipv4", 00:34:55.865 "trsvcid": "4420", 00:34:55.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:55.865 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:34:55.865 "prchk_reftag": false, 00:34:55.865 "prchk_guard": false, 00:34:55.865 "hdgst": false, 00:34:55.865 "ddgst": false, 00:34:55.865 "psk": "key0", 00:34:55.865 "allow_unrecognized_csi": false, 00:34:55.865 "method": "bdev_nvme_attach_controller", 00:34:55.865 "req_id": 1 00:34:55.865 } 00:34:55.865 Got JSON-RPC error response 00:34:55.865 response: 00:34:55.865 { 00:34:55.865 "code": -19, 00:34:55.865 "message": "No such device" 00:34:55.865 } 00:34:55.865 17:52:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:55.865 17:52:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:55.865 17:52:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:55.865 17:52:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:55.865 17:52:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:55.865 17:52:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:55.865 17:52:58 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:55.865 17:52:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:55.865 17:52:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:55.865 17:52:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:55.865 17:52:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:55.865 17:52:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:55.865 17:52:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XmZYW3NKXS 00:34:55.865 17:52:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:55.865 17:52:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:55.865 17:52:58 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:34:55.865 17:52:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:55.865 17:52:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:55.865 17:52:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:55.865 17:52:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:56.124 17:52:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XmZYW3NKXS 00:34:56.124 17:52:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XmZYW3NKXS 00:34:56.124 17:52:58 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.XmZYW3NKXS 00:34:56.124 17:52:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XmZYW3NKXS 00:34:56.124 17:52:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XmZYW3NKXS 00:34:56.124 17:52:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:56.124 17:52:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:56.382 nvme0n1 00:34:56.382 17:52:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:56.382 17:52:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:56.382 17:52:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:56.382 17:52:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:56.382 17:52:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:56.382 17:52:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:56.641 17:52:58 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:56.641 17:52:58 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:56.641 17:52:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:56.899 17:52:58 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:56.899 17:52:58 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:56.899 17:52:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:56.899 17:52:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:56.899 17:52:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.158 17:52:59 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:57.158 17:52:59 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:57.158 17:52:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:57.158 17:52:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:57.158 17:52:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.158 17:52:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:57.159 17:52:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.159 17:52:59 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:57.159 17:52:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:57.159 17:52:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:34:57.419 17:52:59 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:57.419 17:52:59 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:57.419 17:52:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.679 17:52:59 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:57.679 17:52:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XmZYW3NKXS 00:34:57.679 17:52:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XmZYW3NKXS 00:34:57.938 17:52:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Bv5jpcyHpD 00:34:57.938 17:52:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Bv5jpcyHpD 00:34:58.197 17:53:00 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.197 17:53:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.197 nvme0n1 00:34:58.455 17:53:00 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:58.456 17:53:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:58.716 17:53:00 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:58.716 "subsystems": [ 00:34:58.716 { 00:34:58.716 "subsystem": 
"keyring", 00:34:58.716 "config": [ 00:34:58.716 { 00:34:58.716 "method": "keyring_file_add_key", 00:34:58.716 "params": { 00:34:58.716 "name": "key0", 00:34:58.716 "path": "/tmp/tmp.XmZYW3NKXS" 00:34:58.716 } 00:34:58.716 }, 00:34:58.716 { 00:34:58.716 "method": "keyring_file_add_key", 00:34:58.716 "params": { 00:34:58.716 "name": "key1", 00:34:58.716 "path": "/tmp/tmp.Bv5jpcyHpD" 00:34:58.716 } 00:34:58.716 } 00:34:58.716 ] 00:34:58.716 }, 00:34:58.716 { 00:34:58.716 "subsystem": "iobuf", 00:34:58.716 "config": [ 00:34:58.716 { 00:34:58.716 "method": "iobuf_set_options", 00:34:58.716 "params": { 00:34:58.716 "small_pool_count": 8192, 00:34:58.716 "large_pool_count": 1024, 00:34:58.716 "small_bufsize": 8192, 00:34:58.716 "large_bufsize": 135168, 00:34:58.716 "enable_numa": false 00:34:58.716 } 00:34:58.716 } 00:34:58.716 ] 00:34:58.716 }, 00:34:58.716 { 00:34:58.716 "subsystem": "sock", 00:34:58.716 "config": [ 00:34:58.716 { 00:34:58.716 "method": "sock_set_default_impl", 00:34:58.716 "params": { 00:34:58.716 "impl_name": "posix" 00:34:58.716 } 00:34:58.716 }, 00:34:58.716 { 00:34:58.716 "method": "sock_impl_set_options", 00:34:58.716 "params": { 00:34:58.716 "impl_name": "ssl", 00:34:58.716 "recv_buf_size": 4096, 00:34:58.716 "send_buf_size": 4096, 00:34:58.716 "enable_recv_pipe": true, 00:34:58.716 "enable_quickack": false, 00:34:58.716 "enable_placement_id": 0, 00:34:58.716 "enable_zerocopy_send_server": true, 00:34:58.716 "enable_zerocopy_send_client": false, 00:34:58.716 "zerocopy_threshold": 0, 00:34:58.716 "tls_version": 0, 00:34:58.716 "enable_ktls": false 00:34:58.716 } 00:34:58.716 }, 00:34:58.716 { 00:34:58.716 "method": "sock_impl_set_options", 00:34:58.716 "params": { 00:34:58.716 "impl_name": "posix", 00:34:58.716 "recv_buf_size": 2097152, 00:34:58.716 "send_buf_size": 2097152, 00:34:58.716 "enable_recv_pipe": true, 00:34:58.716 "enable_quickack": false, 00:34:58.716 "enable_placement_id": 0, 00:34:58.716 "enable_zerocopy_send_server": true, 
00:34:58.716 "enable_zerocopy_send_client": false, 00:34:58.716 "zerocopy_threshold": 0, 00:34:58.716 "tls_version": 0, 00:34:58.716 "enable_ktls": false 00:34:58.716 } 00:34:58.716 } 00:34:58.716 ] 00:34:58.716 }, 00:34:58.716 { 00:34:58.716 "subsystem": "vmd", 00:34:58.716 "config": [] 00:34:58.716 }, 00:34:58.716 { 00:34:58.716 "subsystem": "accel", 00:34:58.716 "config": [ 00:34:58.716 { 00:34:58.716 "method": "accel_set_options", 00:34:58.716 "params": { 00:34:58.716 "small_cache_size": 128, 00:34:58.716 "large_cache_size": 16, 00:34:58.716 "task_count": 2048, 00:34:58.717 "sequence_count": 2048, 00:34:58.717 "buf_count": 2048 00:34:58.717 } 00:34:58.717 } 00:34:58.717 ] 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "subsystem": "bdev", 00:34:58.717 "config": [ 00:34:58.717 { 00:34:58.717 "method": "bdev_set_options", 00:34:58.717 "params": { 00:34:58.717 "bdev_io_pool_size": 65535, 00:34:58.717 "bdev_io_cache_size": 256, 00:34:58.717 "bdev_auto_examine": true, 00:34:58.717 "iobuf_small_cache_size": 128, 00:34:58.717 "iobuf_large_cache_size": 16 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "bdev_raid_set_options", 00:34:58.717 "params": { 00:34:58.717 "process_window_size_kb": 1024, 00:34:58.717 "process_max_bandwidth_mb_sec": 0 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "bdev_iscsi_set_options", 00:34:58.717 "params": { 00:34:58.717 "timeout_sec": 30 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "bdev_nvme_set_options", 00:34:58.717 "params": { 00:34:58.717 "action_on_timeout": "none", 00:34:58.717 "timeout_us": 0, 00:34:58.717 "timeout_admin_us": 0, 00:34:58.717 "keep_alive_timeout_ms": 10000, 00:34:58.717 "arbitration_burst": 0, 00:34:58.717 "low_priority_weight": 0, 00:34:58.717 "medium_priority_weight": 0, 00:34:58.717 "high_priority_weight": 0, 00:34:58.717 "nvme_adminq_poll_period_us": 10000, 00:34:58.717 "nvme_ioq_poll_period_us": 0, 00:34:58.717 "io_queue_requests": 512, 
00:34:58.717 "delay_cmd_submit": true, 00:34:58.717 "transport_retry_count": 4, 00:34:58.717 "bdev_retry_count": 3, 00:34:58.717 "transport_ack_timeout": 0, 00:34:58.717 "ctrlr_loss_timeout_sec": 0, 00:34:58.717 "reconnect_delay_sec": 0, 00:34:58.717 "fast_io_fail_timeout_sec": 0, 00:34:58.717 "disable_auto_failback": false, 00:34:58.717 "generate_uuids": false, 00:34:58.717 "transport_tos": 0, 00:34:58.717 "nvme_error_stat": false, 00:34:58.717 "rdma_srq_size": 0, 00:34:58.717 "io_path_stat": false, 00:34:58.717 "allow_accel_sequence": false, 00:34:58.717 "rdma_max_cq_size": 0, 00:34:58.717 "rdma_cm_event_timeout_ms": 0, 00:34:58.717 "dhchap_digests": [ 00:34:58.717 "sha256", 00:34:58.717 "sha384", 00:34:58.717 "sha512" 00:34:58.717 ], 00:34:58.717 "dhchap_dhgroups": [ 00:34:58.717 "null", 00:34:58.717 "ffdhe2048", 00:34:58.717 "ffdhe3072", 00:34:58.717 "ffdhe4096", 00:34:58.717 "ffdhe6144", 00:34:58.717 "ffdhe8192" 00:34:58.717 ] 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "bdev_nvme_attach_controller", 00:34:58.717 "params": { 00:34:58.717 "name": "nvme0", 00:34:58.717 "trtype": "TCP", 00:34:58.717 "adrfam": "IPv4", 00:34:58.717 "traddr": "127.0.0.1", 00:34:58.717 "trsvcid": "4420", 00:34:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.717 "prchk_reftag": false, 00:34:58.717 "prchk_guard": false, 00:34:58.717 "ctrlr_loss_timeout_sec": 0, 00:34:58.717 "reconnect_delay_sec": 0, 00:34:58.717 "fast_io_fail_timeout_sec": 0, 00:34:58.717 "psk": "key0", 00:34:58.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.717 "hdgst": false, 00:34:58.717 "ddgst": false, 00:34:58.717 "multipath": "multipath" 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "bdev_nvme_set_hotplug", 00:34:58.717 "params": { 00:34:58.717 "period_us": 100000, 00:34:58.717 "enable": false 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "bdev_wait_for_examine" 00:34:58.717 } 00:34:58.717 ] 00:34:58.717 }, 00:34:58.717 { 
00:34:58.717 "subsystem": "nbd", 00:34:58.717 "config": [] 00:34:58.717 } 00:34:58.717 ] 00:34:58.717 }' 00:34:58.717 17:53:00 keyring_file -- keyring/file.sh@115 -- # killprocess 3735907 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3735907 ']' 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3735907 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3735907 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3735907' 00:34:58.717 killing process with pid 3735907 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@973 -- # kill 3735907 00:34:58.717 Received shutdown signal, test time was about 1.000000 seconds 00:34:58.717 00:34:58.717 Latency(us) 00:34:58.717 [2024-11-19T16:53:00.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.717 [2024-11-19T16:53:00.940Z] =================================================================================================================== 00:34:58.717 [2024-11-19T16:53:00.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@978 -- # wait 3735907 00:34:58.717 17:53:00 keyring_file -- keyring/file.sh@118 -- # bperfpid=3737426 00:34:58.717 17:53:00 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3737426 /var/tmp/bperf.sock 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3737426 ']' 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:34:58.717 17:53:00 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:58.717 17:53:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:58.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:58.717 17:53:00 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:58.717 "subsystems": [ 00:34:58.717 { 00:34:58.717 "subsystem": "keyring", 00:34:58.717 "config": [ 00:34:58.717 { 00:34:58.717 "method": "keyring_file_add_key", 00:34:58.717 "params": { 00:34:58.717 "name": "key0", 00:34:58.717 "path": "/tmp/tmp.XmZYW3NKXS" 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "keyring_file_add_key", 00:34:58.717 "params": { 00:34:58.717 "name": "key1", 00:34:58.717 "path": "/tmp/tmp.Bv5jpcyHpD" 00:34:58.717 } 00:34:58.717 } 00:34:58.717 ] 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "subsystem": "iobuf", 00:34:58.717 "config": [ 00:34:58.717 { 00:34:58.717 "method": "iobuf_set_options", 00:34:58.717 "params": { 00:34:58.717 "small_pool_count": 8192, 00:34:58.717 "large_pool_count": 1024, 00:34:58.717 "small_bufsize": 8192, 00:34:58.717 "large_bufsize": 135168, 00:34:58.717 "enable_numa": false 00:34:58.717 } 00:34:58.717 } 00:34:58.717 ] 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "subsystem": "sock", 00:34:58.717 "config": [ 00:34:58.717 { 00:34:58.717 "method": "sock_set_default_impl", 00:34:58.717 "params": { 00:34:58.717 "impl_name": "posix" 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "sock_impl_set_options", 00:34:58.717 "params": { 00:34:58.717 "impl_name": "ssl", 00:34:58.717 "recv_buf_size": 4096, 00:34:58.717 
"send_buf_size": 4096, 00:34:58.717 "enable_recv_pipe": true, 00:34:58.717 "enable_quickack": false, 00:34:58.717 "enable_placement_id": 0, 00:34:58.717 "enable_zerocopy_send_server": true, 00:34:58.717 "enable_zerocopy_send_client": false, 00:34:58.717 "zerocopy_threshold": 0, 00:34:58.717 "tls_version": 0, 00:34:58.717 "enable_ktls": false 00:34:58.717 } 00:34:58.717 }, 00:34:58.717 { 00:34:58.717 "method": "sock_impl_set_options", 00:34:58.717 "params": { 00:34:58.717 "impl_name": "posix", 00:34:58.717 "recv_buf_size": 2097152, 00:34:58.717 "send_buf_size": 2097152, 00:34:58.717 "enable_recv_pipe": true, 00:34:58.717 "enable_quickack": false, 00:34:58.717 "enable_placement_id": 0, 00:34:58.717 "enable_zerocopy_send_server": true, 00:34:58.717 "enable_zerocopy_send_client": false, 00:34:58.717 "zerocopy_threshold": 0, 00:34:58.717 "tls_version": 0, 00:34:58.718 "enable_ktls": false 00:34:58.718 } 00:34:58.718 } 00:34:58.718 ] 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "subsystem": "vmd", 00:34:58.718 "config": [] 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "subsystem": "accel", 00:34:58.718 "config": [ 00:34:58.718 { 00:34:58.718 "method": "accel_set_options", 00:34:58.718 "params": { 00:34:58.718 "small_cache_size": 128, 00:34:58.718 "large_cache_size": 16, 00:34:58.718 "task_count": 2048, 00:34:58.718 "sequence_count": 2048, 00:34:58.718 "buf_count": 2048 00:34:58.718 } 00:34:58.718 } 00:34:58.718 ] 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "subsystem": "bdev", 00:34:58.718 "config": [ 00:34:58.718 { 00:34:58.718 "method": "bdev_set_options", 00:34:58.718 "params": { 00:34:58.718 "bdev_io_pool_size": 65535, 00:34:58.718 "bdev_io_cache_size": 256, 00:34:58.718 "bdev_auto_examine": true, 00:34:58.718 "iobuf_small_cache_size": 128, 00:34:58.718 "iobuf_large_cache_size": 16 00:34:58.718 } 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "method": "bdev_raid_set_options", 00:34:58.718 "params": { 00:34:58.718 "process_window_size_kb": 1024, 00:34:58.718 
"process_max_bandwidth_mb_sec": 0 00:34:58.718 } 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "method": "bdev_iscsi_set_options", 00:34:58.718 "params": { 00:34:58.718 "timeout_sec": 30 00:34:58.718 } 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "method": "bdev_nvme_set_options", 00:34:58.718 "params": { 00:34:58.718 "action_on_timeout": "none", 00:34:58.718 "timeout_us": 0, 00:34:58.718 "timeout_admin_us": 0, 00:34:58.718 "keep_alive_timeout_ms": 10000, 00:34:58.718 "arbitration_burst": 0, 00:34:58.718 "low_priority_weight": 0, 00:34:58.718 "medium_priority_weight": 0, 00:34:58.718 "high_priority_weight": 0, 00:34:58.718 "nvme_adminq_poll_period_us": 10000, 00:34:58.718 "nvme_ioq_poll_period_us": 0, 00:34:58.718 "io_queue_requests": 512, 00:34:58.718 "delay_cmd_submit": true, 00:34:58.718 "transport_retry_count": 4, 00:34:58.718 "bdev_retry_count": 3, 00:34:58.718 "transport_ack_timeout": 0, 00:34:58.718 "ctrlr_loss_timeout_sec": 0, 00:34:58.718 "reconnect_delay_sec": 0, 00:34:58.718 "fast_io_fail_timeout_sec": 0, 00:34:58.718 "disable_auto_failback": false, 00:34:58.718 "generate_uuids": false, 00:34:58.718 "transport_tos": 0, 00:34:58.718 "nvme_error_stat": false, 00:34:58.718 "rdma_srq_size": 0, 00:34:58.718 "io_path_stat": false, 00:34:58.718 "allow_accel_sequence": false, 00:34:58.718 "rdma_max_cq_size": 0, 00:34:58.718 "rdma_cm_event_timeout_ms": 0, 00:34:58.718 "dhchap_digests": [ 00:34:58.718 "sha256", 00:34:58.718 "sha384", 00:34:58.718 "sha512" 00:34:58.718 ], 00:34:58.718 "dhchap_dhgroups": [ 00:34:58.718 "null", 00:34:58.718 "ffdhe2048", 00:34:58.718 "ffdhe3072", 00:34:58.718 "ffdhe4096", 00:34:58.718 "ffdhe6144", 00:34:58.718 "ffdhe8192" 00:34:58.718 ] 00:34:58.718 } 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "method": "bdev_nvme_attach_controller", 00:34:58.718 "params": { 00:34:58.718 "name": "nvme0", 00:34:58.718 "trtype": "TCP", 00:34:58.718 "adrfam": "IPv4", 00:34:58.718 "traddr": "127.0.0.1", 00:34:58.718 "trsvcid": "4420", 00:34:58.718 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:34:58.718 "prchk_reftag": false, 00:34:58.718 "prchk_guard": false, 00:34:58.718 "ctrlr_loss_timeout_sec": 0, 00:34:58.718 "reconnect_delay_sec": 0, 00:34:58.718 "fast_io_fail_timeout_sec": 0, 00:34:58.718 "psk": "key0", 00:34:58.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.718 "hdgst": false, 00:34:58.718 "ddgst": false, 00:34:58.718 "multipath": "multipath" 00:34:58.718 } 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "method": "bdev_nvme_set_hotplug", 00:34:58.718 "params": { 00:34:58.718 "period_us": 100000, 00:34:58.718 "enable": false 00:34:58.718 } 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "method": "bdev_wait_for_examine" 00:34:58.718 } 00:34:58.718 ] 00:34:58.718 }, 00:34:58.718 { 00:34:58.718 "subsystem": "nbd", 00:34:58.718 "config": [] 00:34:58.718 } 00:34:58.718 ] 00:34:58.718 }' 00:34:58.718 17:53:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:58.718 17:53:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:58.978 [2024-11-19 17:53:00.940267] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:34:58.978 [2024-11-19 17:53:00.940321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737426 ] 00:34:58.978 [2024-11-19 17:53:01.016685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.978 [2024-11-19 17:53:01.057985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.236 [2024-11-19 17:53:01.218458] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:59.804 17:53:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.804 17:53:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:59.804 17:53:01 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:59.804 17:53:01 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:59.804 17:53:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:59.804 17:53:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:59.804 17:53:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:59.804 17:53:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:59.804 17:53:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:59.804 17:53:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:59.804 17:53:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:59.804 17:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.064 17:53:02 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:00.064 17:53:02 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:00.064 17:53:02 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:00.064 17:53:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.064 17:53:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.064 17:53:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:00.064 17:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.323 17:53:02 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:00.323 17:53:02 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:00.323 17:53:02 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:00.323 17:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:00.582 17:53:02 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:00.582 17:53:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:00.582 17:53:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XmZYW3NKXS /tmp/tmp.Bv5jpcyHpD 00:35:00.582 17:53:02 keyring_file -- keyring/file.sh@20 -- # killprocess 3737426 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3737426 ']' 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3737426 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3737426 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3737426' 00:35:00.582 killing process with pid 3737426 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@973 -- # kill 3737426 00:35:00.582 Received shutdown signal, test time was about 1.000000 seconds 00:35:00.582 00:35:00.582 Latency(us) 00:35:00.582 [2024-11-19T16:53:02.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.582 [2024-11-19T16:53:02.805Z] =================================================================================================================== 00:35:00.582 [2024-11-19T16:53:02.805Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:00.582 17:53:02 keyring_file -- common/autotest_common.sh@978 -- # wait 3737426 00:35:00.840 17:53:02 keyring_file -- keyring/file.sh@21 -- # killprocess 3735888 00:35:00.840 17:53:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3735888 ']' 00:35:00.840 17:53:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3735888 00:35:00.840 17:53:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:00.840 17:53:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.840 17:53:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3735888 00:35:00.840 17:53:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:00.840 17:53:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:00.841 17:53:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3735888' 00:35:00.841 killing process with pid 3735888 00:35:00.841 17:53:02 keyring_file -- common/autotest_common.sh@973 -- # kill 3735888 00:35:00.841 17:53:02 keyring_file -- common/autotest_common.sh@978 -- # wait 3735888 00:35:01.098 00:35:01.098 real 0m11.939s 00:35:01.098 user 0m29.700s 00:35:01.098 sys 0m2.770s 00:35:01.098 17:53:03 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:01.098 17:53:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:01.098 ************************************ 00:35:01.098 END TEST keyring_file 00:35:01.098 ************************************ 00:35:01.098 17:53:03 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:01.099 17:53:03 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:01.099 17:53:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:01.099 17:53:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:01.099 17:53:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.099 ************************************ 00:35:01.099 START TEST keyring_linux 00:35:01.099 ************************************ 00:35:01.099 17:53:03 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:01.099 Joined session keyring: 823166885 00:35:01.357 * Looking for test storage... 
00:35:01.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.357 --rc genhtml_branch_coverage=1 00:35:01.357 --rc genhtml_function_coverage=1 00:35:01.357 --rc genhtml_legend=1 00:35:01.357 --rc geninfo_all_blocks=1 00:35:01.357 --rc geninfo_unexecuted_blocks=1 00:35:01.357 00:35:01.357 ' 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.357 --rc genhtml_branch_coverage=1 00:35:01.357 --rc genhtml_function_coverage=1 00:35:01.357 --rc genhtml_legend=1 00:35:01.357 --rc geninfo_all_blocks=1 00:35:01.357 --rc geninfo_unexecuted_blocks=1 00:35:01.357 00:35:01.357 ' 
00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.357 --rc genhtml_branch_coverage=1 00:35:01.357 --rc genhtml_function_coverage=1 00:35:01.357 --rc genhtml_legend=1 00:35:01.357 --rc geninfo_all_blocks=1 00:35:01.357 --rc geninfo_unexecuted_blocks=1 00:35:01.357 00:35:01.357 ' 00:35:01.357 17:53:03 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.357 --rc genhtml_branch_coverage=1 00:35:01.357 --rc genhtml_function_coverage=1 00:35:01.357 --rc genhtml_legend=1 00:35:01.357 --rc geninfo_all_blocks=1 00:35:01.357 --rc geninfo_unexecuted_blocks=1 00:35:01.357 00:35:01.357 ' 00:35:01.357 17:53:03 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:01.357 17:53:03 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.357 17:53:03 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.357 17:53:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.357 17:53:03 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.357 17:53:03 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.357 17:53:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:01.357 17:53:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:01.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:01.357 17:53:03 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:01.357 17:53:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:01.357 17:53:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:01.358 /tmp/:spdk-test:key0 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:01.358 17:53:03 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:01.358 17:53:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:01.358 /tmp/:spdk-test:key1 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3737978 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3737978 00:35:01.358 17:53:03 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:01.358 17:53:03 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3737978 ']' 00:35:01.358 17:53:03 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.358 17:53:03 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.358 17:53:03 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.358 17:53:03 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.358 17:53:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:01.616 [2024-11-19 17:53:03.587255] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:35:01.616 [2024-11-19 17:53:03.587308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737978 ] 00:35:01.616 [2024-11-19 17:53:03.660298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.616 [2024-11-19 17:53:03.703719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:02.552 17:53:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:02.552 [2024-11-19 17:53:04.419704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.552 null0 00:35:02.552 [2024-11-19 17:53:04.451752] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:02.552 [2024-11-19 17:53:04.452131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.552 17:53:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:02.552 957324804 00:35:02.552 17:53:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:02.552 753561014 00:35:02.552 17:53:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3738073 00:35:02.552 17:53:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3738073 /var/tmp/bperf.sock 00:35:02.552 17:53:04 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3738073 ']' 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:02.552 [2024-11-19 17:53:04.525028] Starting SPDK v25.01-pre git sha1 ea8382642 / DPDK 24.03.0 initialization... 
00:35:02.552 [2024-11-19 17:53:04.525075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738073 ] 00:35:02.552 [2024-11-19 17:53:04.599152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.552 [2024-11-19 17:53:04.641166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.552 17:53:04 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:02.552 17:53:04 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:02.552 17:53:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:02.811 17:53:04 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:02.811 17:53:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:03.070 17:53:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:03.070 17:53:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:03.070 [2024-11-19 17:53:05.282395] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:03.328 nvme0n1 00:35:03.328 17:53:05 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0
00:35:03.328 17:53:05 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:35:03.328 17:53:05 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:35:03.328 17:53:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:35:03.328 17:53:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:03.328 17:53:05 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:35:03.587 17:53:05 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:03.587 17:53:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:35:03.587 17:53:05 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@25 -- # sn=957324804
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@26 -- # [[ 957324804 == \9\5\7\3\2\4\8\0\4 ]]
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 957324804
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:35:03.587 17:53:05 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:03.845 Running I/O for 1 seconds...
00:35:04.781 21339.00 IOPS, 83.36 MiB/s
00:35:04.781 Latency(us)
00:35:04.781 [2024-11-19T16:53:07.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:04.781 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:04.781 nvme0n1 : 1.01 21339.45 83.36 0.00 0.00 5978.53 5214.39 11226.60
00:35:04.781 [2024-11-19T16:53:07.004Z] ===================================================================================================================
00:35:04.781 [2024-11-19T16:53:07.004Z] Total : 21339.45 83.36 0.00 0.00 5978.53 5214.39 11226.60
00:35:04.781 {
00:35:04.781 "results": [
00:35:04.781 {
00:35:04.781 "job": "nvme0n1",
00:35:04.781 "core_mask": "0x2",
00:35:04.781 "workload": "randread",
00:35:04.781 "status": "finished",
00:35:04.781 "queue_depth": 128,
00:35:04.781 "io_size": 4096,
00:35:04.781 "runtime": 1.005977,
00:35:04.781 "iops": 21339.45408294623,
00:35:04.781 "mibps": 83.35724251150872,
00:35:04.781 "io_failed": 0,
00:35:04.781 "io_timeout": 0,
00:35:04.781 "avg_latency_us": 5978.526547643401,
00:35:04.781 "min_latency_us": 5214.3860869565215,
00:35:04.781 "max_latency_us": 11226.601739130434
00:35:04.781 }
00:35:04.781 ],
00:35:04.781 "core_count": 1
00:35:04.781 }
00:35:04.781 17:53:06 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:35:04.781 17:53:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:35:05.040 17:53:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:35:05.040 17:53:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:35:05.040 17:53:07 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:35:05.040 17:53:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:35:05.040 17:53:07 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:35:05.040 17:53:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:05.299 17:53:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:35:05.299 17:53:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:35:05.299 17:53:07 keyring_linux -- keyring/linux.sh@23 -- # return
00:35:05.299 17:53:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:05.299 17:53:07 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:35:05.299 17:53:07 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:05.299 17:53:07 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:35:05.299 17:53:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:05.299 17:53:07 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:35:05.299 17:53:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:05.299 17:53:07 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:05.299 17:53:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:05.299 [2024-11-19 17:53:07.501498] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:35:05.299 [2024-11-19 17:53:07.502200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eda70 (107): Transport endpoint is not connected
00:35:05.299 [2024-11-19 17:53:07.503195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eda70 (9): Bad file descriptor
00:35:05.299 [2024-11-19 17:53:07.504196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:35:05.299 [2024-11-19 17:53:07.504206] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:35:05.299 [2024-11-19 17:53:07.504213] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:35:05.299 [2024-11-19 17:53:07.504221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:35:05.299 request:
00:35:05.299 {
00:35:05.299 "name": "nvme0",
00:35:05.299 "trtype": "tcp",
00:35:05.299 "traddr": "127.0.0.1",
00:35:05.299 "adrfam": "ipv4",
00:35:05.299 "trsvcid": "4420",
00:35:05.299 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:05.299 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:05.299 "prchk_reftag": false,
00:35:05.299 "prchk_guard": false,
00:35:05.299 "hdgst": false,
00:35:05.299 "ddgst": false,
00:35:05.299 "psk": ":spdk-test:key1",
00:35:05.299 "allow_unrecognized_csi": false,
00:35:05.299 "method": "bdev_nvme_attach_controller",
00:35:05.299 "req_id": 1
00:35:05.299 }
00:35:05.299 Got JSON-RPC error response
response:
00:35:05.299 {
00:35:05.299 "code": -5,
00:35:05.299 "message": "Input/output error"
00:35:05.299 }
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@33 -- # sn=957324804
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 957324804
00:35:05.558 1 links removed
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@33 -- # sn=753561014
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 753561014
00:35:05.558 1 links removed
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3738073
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3738073 ']'
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3738073
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3738073
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3738073'
killing process with pid 3738073
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 3738073
00:35:05.558 Received shutdown signal, test time was about 1.000000 seconds
00:35:05.558
00:35:05.558 Latency(us)
00:35:05.558 [2024-11-19T16:53:07.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:05.558 [2024-11-19T16:53:07.781Z] ===================================================================================================================
00:35:05.558 [2024-11-19T16:53:07.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 3738073
00:35:05.558 17:53:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3737978
00:35:05.558 17:53:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3737978 ']'
00:35:05.559 17:53:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3737978
00:35:05.559 17:53:07 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:05.559 17:53:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:05.559 17:53:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3737978
00:35:05.817 17:53:07 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:05.817 17:53:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:05.817 17:53:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3737978'
killing process with pid 3737978
00:35:05.817 17:53:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 3737978
00:35:05.818 17:53:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 3737978
00:35:06.077
00:35:06.077 real 0m4.860s
00:35:06.077 user 0m8.870s
00:35:06.077 sys 0m1.472s
00:35:06.077 17:53:08 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:06.077 17:53:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:35:06.077 ************************************
00:35:06.077 END TEST keyring_linux
00:35:06.077 ************************************
00:35:06.077 17:53:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:06.077 17:53:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:06.077 17:53:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:06.077 17:53:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:06.077 17:53:08 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:06.077 17:53:08 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:06.077 17:53:08 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:06.077 17:53:08 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:06.077 17:53:08 -- common/autotest_common.sh@10 -- # set +x
00:35:06.077 17:53:08 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:06.077 17:53:08 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:06.077 17:53:08 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:06.077 17:53:08 -- common/autotest_common.sh@10 -- # set +x
00:35:11.352 INFO: APP EXITING
00:35:11.352 INFO: killing all VMs
00:35:11.352 INFO: killing vhost app
00:35:11.352 INFO: EXIT DONE
00:35:13.888 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:35:13.888 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:35:13.888 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:35:17.178 Cleaning
00:35:17.178 Removing: /var/run/dpdk/spdk0/config
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:17.178 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:17.178 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:17.178 Removing: /var/run/dpdk/spdk1/config
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:17.178 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:17.178 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:17.178 Removing: /var/run/dpdk/spdk2/config
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:17.178 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:17.178 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:17.178 Removing: /var/run/dpdk/spdk3/config
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:17.178 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:17.178 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:17.178 Removing: /var/run/dpdk/spdk4/config
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:17.178 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:17.178 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:17.178 Removing: /dev/shm/bdev_svc_trace.1
00:35:17.178 Removing: /dev/shm/nvmf_trace.0
00:35:17.178 Removing: /dev/shm/spdk_tgt_trace.pid3259041
00:35:17.178 Removing: /var/run/dpdk/spdk0
00:35:17.178 Removing: /var/run/dpdk/spdk1
00:35:17.178 Removing: /var/run/dpdk/spdk2
00:35:17.178 Removing: /var/run/dpdk/spdk3
00:35:17.178 Removing: /var/run/dpdk/spdk4
00:35:17.178 Removing: /var/run/dpdk/spdk_pid3256891
00:35:17.178 Removing: /var/run/dpdk/spdk_pid3257957
00:35:17.178 Removing: /var/run/dpdk/spdk_pid3259041
00:35:17.178 Removing: /var/run/dpdk/spdk_pid3259676
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3260615
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3260660
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3261690
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3261846
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3262156
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3263710
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3264994
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3265340
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3265573
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3266074
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3266214
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3266450
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3266696
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3266987
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3267727
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3270831
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3271118
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3271247
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3271413
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3271766
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3271969
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3272276
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3272465
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3272733
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3272738
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3272995
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3273011
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3273571
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3273821
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3274120
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3277823
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3282234
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3292404
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3293036
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3297481
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3297889
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3302411
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3308297
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3311086
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3321266
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3330268
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3331886
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3332823
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3350429
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3354501
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3400006
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3405404
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3411169
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3417710
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3417797
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3418587
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3419499
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3420416
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3420888
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3420938
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3421249
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3421345
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3421355
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3422268
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3423179
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3423999
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3424577
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3424588
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3424818
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3425916
00:35:17.179 Removing: /var/run/dpdk/spdk_pid3426956
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3435151
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3464502
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3469006
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3470613
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3472447
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3472525
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3472742
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3473036
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3473566
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3475709
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3476557
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3477034
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3479169
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3479653
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3480375
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3484491
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3490059
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3490060
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3490061
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3493838
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3502188
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3505996
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3511980
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3513285
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3514611
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3516149
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3521266
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3525548
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3529725
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3537108
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3537110
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3541820
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3542049
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3542277
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3542646
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3542739
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3547217
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3547677
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3552141
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3554678
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3560072
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3565405
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3574715
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3581809
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3581859
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3600574
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3601200
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3601675
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3602157
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3602884
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3603396
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3604043
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3604524
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3608718
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3609001
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3614945
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3615122
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3620986
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3625353
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3635090
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3635607
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3639814
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3640065
00:35:17.438 Removing: /var/run/dpdk/spdk_pid3644328
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3650188
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3652771
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3662814
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3671878
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3673495
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3674400
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3690732
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3694552
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3697242
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3705057
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3705156
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3710233
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3712655
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3714647
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3715732
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3717703
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3718778
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3727527
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3728146
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3728656
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3730922
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3731392
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3731865
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3735888
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3735907
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3737426
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3737978
00:35:17.697 Removing: /var/run/dpdk/spdk_pid3738073
00:35:17.697 Clean
00:35:17.697 17:53:19 -- common/autotest_common.sh@1453 -- # return 0
00:35:17.697 17:53:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:17.697 17:53:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:17.697 17:53:19 -- common/autotest_common.sh@10 -- # set +x
00:35:17.697 17:53:19 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:17.697 17:53:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:17.697 17:53:19 -- common/autotest_common.sh@10 -- # set +x
00:35:17.956 17:53:19 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:17.956 17:53:19 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:17.956 17:53:19 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:17.956 17:53:19 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:17.956 17:53:19 -- spdk/autotest.sh@398 -- # hostname
00:35:17.956 17:53:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:17.956 geninfo: WARNING: invalid characters removed from testname!
00:35:39.888 17:53:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:41.888 17:53:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:43.792 17:53:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:45.694 17:53:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:47.606 17:53:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:49.506 17:53:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:51.411 17:53:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:51.411 17:53:53 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:51.411 17:53:53 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:35:51.411 17:53:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:51.411 17:53:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:51.411 17:53:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:51.411 + [[ -n 3179430 ]]
00:35:51.411 + sudo kill 3179430
00:35:51.422 [Pipeline] }
00:35:51.437 [Pipeline] // stage
00:35:51.442 [Pipeline] }
00:35:51.457 [Pipeline] // timeout
00:35:51.461 [Pipeline] }
00:35:51.475 [Pipeline] // catchError
00:35:51.481 [Pipeline] }
00:35:51.503 [Pipeline] // wrap
00:35:51.509 [Pipeline] }
00:35:51.521 [Pipeline] // catchError
00:35:51.530 [Pipeline] stage
00:35:51.532 [Pipeline] { (Epilogue)
00:35:51.544 [Pipeline] catchError
00:35:51.546 [Pipeline] {
00:35:51.557 [Pipeline] echo
00:35:51.559 Cleanup processes
00:35:51.565 [Pipeline] sh
00:35:51.852 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:51.852 3748683 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:51.868 [Pipeline] sh
00:35:52.202 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:52.202 ++ grep -v 'sudo pgrep'
00:35:52.202 ++ awk '{print $1}'
00:35:52.202 + sudo kill -9
00:35:52.202 + true
00:35:52.215 [Pipeline] sh
00:35:52.506 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:04.729 [Pipeline] sh
00:36:05.015 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:05.015 Artifacts sizes are good
00:36:05.030 [Pipeline] archiveArtifacts
00:36:05.037 Archiving artifacts
00:36:05.157 [Pipeline] sh
00:36:05.443 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:05.459 [Pipeline] cleanWs
00:36:05.470 [WS-CLEANUP] Deleting project workspace...
00:36:05.470 [WS-CLEANUP] Deferred wipeout is used...
00:36:05.477 [WS-CLEANUP] done
00:36:05.479 [Pipeline] }
00:36:05.498 [Pipeline] // catchError
00:36:05.511 [Pipeline] sh
00:36:05.796 + logger -p user.info -t JENKINS-CI
00:36:05.806 [Pipeline] }
00:36:05.820 [Pipeline] // stage
00:36:05.825 [Pipeline] }
00:36:05.840 [Pipeline] // node
00:36:05.845 [Pipeline] End of Pipeline
00:36:05.885 Finished: SUCCESS